Artificial Intelligence: Conflicts of interest between Ethics and the needs of an adapted…

Emmanuelle ROBIN

Are we at the dawn of a terrifying upheaval generated by the merger of biotech and infotech, or are these forecasts just hysteria without real foundation, given the lack of certainty even in the medium term? Where do we stand on regulation and ethics, whether to curb an arms race or to protect Human Rights?

In any case, we are witnessing a certain effervescence in AI regulation, one far more muted in the USA, North Korea or Putin’s Russia, especially where an unbridled arms race is concerned.

Artificial intelligence can be beneficial in a wide range of sectors, such as health care, energy consumption, vehicle safety, agriculture, climate change and financial risk management. It can also help detect fraud and cybersecurity threats, and enable law enforcement authorities to fight crime more effectively.

See also, on medium.com: “Artificial intelligence, intelligent robots: what future for humans?”

However, AI also raises new questions about the future of work, along with legal and ethical issues. Faced with questions that often outpace decision-makers, our democratic societies urgently needed to define major strategic lines of action, putting both States and ordinary citizen-consumers on their guard.

At this stage, the European Union has recently published a set of recommendations and rules for developing ethical and responsible artificial intelligence applications. This work, carried out by a group of about fifty experts, is rather conventional, recalling the major ethical principles meant to guide the field.

As The Verge wryly observed, “most of the proposals are a little abstract and remain based on vague and general principles”.

Perhaps this irony stems from a misunderstanding of the European legal system and, clearly, from the more than obvious lack of certainty over the medium term.

Indeed, the more general the proposals, the better they allow the legal system to adapt to cases that were unforeseeable when the principles and recommendations were adopted. This judicial syllogism has no equivalent in Anglo-Saxon or American judicial systems, where everything must be planned and recorded in advance, at the risk of falling into a legal vacuum given the speed of technological change. But reality is far more changeable!

The German philosopher and ethicist Thomas Metzinger was among those who sat on this commission. In the German newspaper Der Tagesspiegel, he published an article wondering whether he had, in the end, taken part in the ambient “ethics washing”, that is, whether his work merely legitimized a form of ethics that is not really one!

From the outset, he concedes:

“The result is a compromise of which I am not proud, but which is nevertheless the best in the world in this area.”

In fact, neither the United States nor China, the leaders in AI, has published guidelines to steer AI research and development.

These first principles, although imperfect, should inspire future legal compliance rules on these issues and, in time, lead to full regulatory frameworks.

But we are not there yet!

For the time being, the recommendations are lukewarm, short-sighted and deliberately vague. They ignore long-term risks, gloss over difficult problems (such as “explainability”) with rhetoric, violate basic principles of rationality and claim to know things that nobody really knows.
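To see why “explainability” resists rhetoric, consider how little it takes to probe even a trivial model. Below is a minimal sketch of one common technique, permutation importance; the model, feature names and data are invented for illustration, not drawn from any system discussed here:

```python
# A toy "explainability" probe: permutation importance.
# The model, features and data are hypothetical, invented for illustration.
import random

random.seed(0)  # reproducible shuffles

def model(features):
    # Toy credit-scoring rule: approve when a weighted sum clears a threshold.
    income, debt, age = features
    return income * 0.6 - debt * 0.8 + age * 0.1 > 10

data = [((30, 5, 40), True), ((12, 20, 25), False),
        ((25, 2, 35), True), ((8, 15, 50), False)]

def accuracy(dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

baseline = accuracy(data)
for i, name in enumerate(["income", "debt", "age"]):
    # Shuffle one feature across the dataset; the bigger the accuracy drop,
    # the more the model's decisions depend on that feature.
    column = [x[i] for x, _ in data]
    random.shuffle(column)
    permuted = [(x[:i] + (v,) + x[i + 1:], y) for (x, y), v in zip(data, column)]
    print(f"{name}: importance ~ {baseline - accuracy(permuted):.2f}")
```

Shuffling one input at a time and watching accuracy fall gives a crude answer to “which features drive the decision?”. Real systems with millions of parameters are precisely where such answers become murky, which is why glossing over explainability matters.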

It is striking to hear this philosopher explain that of the 52 people in his working group, only a handful were specialists in ethics; most were politicians, civil-society figures, a few researchers and, above all, industry representatives.

Let us not be naïve: we know very well that the applications that will reach the market (surgeon robots, lawyer robots or judge robots, to name only a few already in existence) are sometimes already known and kept secret by a small elite of researchers and engineers working for private companies heavily financed to develop AI quickly.

Revealing their trade secrets within this ethics group would undoubtedly have curbed their room for manoeuvre… and far be it from them to pronounce on the inherent, and therefore ethical, risks of their own research.

Finding compromises was therefore very difficult!

Thomas Metzinger explained that his mission was limited to defining red lines, i.e. “non-negotiable” ethical principles, areas where Europe should refuse to go. Of course, only burning issues such as lethal autonomous weapons or State scoring of citizens (untouchable matters of Fundamental Rights / Human Rights) were addressed.

During discussions on the drafting of the recommendations, the former president of Nokia, the affable Pekka Ala-Pietilä, kindly asked him to delete the mention “non-negotiable”…

Discussion after discussion on the very form of the report, many industry representatives promoting a positive vision insisted vehemently on deleting all references to red lines. The final document mentions only “critical concerns”, diluted into a set of general principles.

Capitalism has overcome ethics: the problems will come back to haunt us later, as with global warming!

For Thomas Metzinger, this is a very concrete example of “ethical laundering”:

“Industry organizes and maintains ethical debates to buy time — to distract the public and to prevent, or at least delay, effective regulation and policy development. Politicians also like to set up ethics committees, because it gives them a plan of action when, given the complexity of the problems, they simply don’t know what to do — and that’s only human.”

In other words, politicians preserve their public image by setting up ethics committees without themselves being knowledgeable or expert in the matters on the agenda. At the same time, industry builds its own “ethics washing machines” to show that it cares about these issues.

Facebook has invested in an AI ethics institute, funding an institution to train AI ethicists. Google very recently did much the same in Lyon, France.

Google, after proclaiming some major principles, launched an ethics committee that blew up in mid-flight.

For the philosopher, the risk is real that ghost committees, self-referential labels and conceptual ethical principles disconnected from operational realities will spring up everywhere.

Yet, the philosopher notes, faced with China and the United States, only Europe is in a position to “assume the burden” of putting principles of responsibility in place.


Despite their limitations, these principles are currently the best we have to move forward. It is up to research and civil society to take control of the process and wrest the debate from the hands of industry alone:

“The window of opportunity within which we can at least partially control the future of AI and effectively defend the philosophical and ethical foundations of European culture will close in a few years,” he warns, alarmed.

An Associated Press article was likewise critical of the fashion for ethics in AI and, above all, of large companies’ reluctance to adopt clear frameworks in this area.

“Creating ethics committees without frameworks to make responsibility operational will not get us very far,” says Austrian researcher Ben Wagner, director of the Privacy Lab at the University of Vienna.

For him, here again, these ethics committees amount to “ethics washing”: a superficial effort aimed only at reassuring the public and legislators and delaying regulation.

Of course, the fact that many companies are studying the issue is interesting, but for the moment they all retain full latitude to decide which principles to integrate into their business decisions, and which to ignore.

For the time being, it is the employees of these large companies who have wielded the most power on these issues: internal criticism is what brought down Google’s ethics committee, and what led the company to cancel a surveillance contract signed with the Pentagon to analyze drone images.

Ben Wagner also wondered how to ensure that ethics is not misused. To this end, he underlines the need for regular external participation involving all stakeholders.

He points out the importance for companies of:

  • establishing an external and independent control mechanism,
  • ensuring transparent decision-making processes that explain and justify the decisions taken,
  • developing a stable list of norms, values and justified rights,
  • ensuring that the ethical question neither replaces nor is reduced to the question of respect for fundamental or human rights,
  • clearly defining the link between the commitments made and existing legal or regulatory frameworks, particularly where the two conflict.

For Wagner, when ethics is seen as an alternative to regulation or a substitute for fundamental rights, then ethics, law and technology suffer.

If we understand Wagner’s comments correctly, ethical provisions must not be limited to general principles but must be transformed into concrete provisions that protect society’s values, as the GDPR already proposes.

For a better understanding, see, on medium.com: “Artificial Intelligence: The challenges up to the emergence of ‘Deep Learning’”.

For designer Molly Wright Steenson, speaking at the Interaction Design conference, we can identify dozens of toolboxes, principles, codes of conduct, frameworks, manifestos and ethical oaths… But ethics is not just about applying the right roadmap to a product, or bolting principles onto practices.

In her presentation, Molly Wright Steenson pointed to a 2006 article in Forbes magazine showing that the fashion for ethics officers is not so new, and that in 2006 as in 2019, principles and committees can achieve nothing if the issue is not addressed company-wide.

The fundamental weakness of this explosion of ethics committees in technology companies, says AI specialist Rumman Chowdhury, is “their lack of transparency”.

Many research and health institutions have long had ethics committees that defend the public interest vis-à-vis the very institutions that established them. In the case of large technology companies, however, the interests these committees represent are not as clear.

What is the monitoring capacity of advisory ethics committees if they cannot make changes to the functioning of the structures they monitor or address the public?

Google only created its ethics charter after its employees objected to its contract with the Pentagon. IBM has also launched ethics initiatives, but that has not prevented the company from working with Philippine police forces on surveillance tools:

“The interest of companies in ethical algorithms so far has not stopped them from supporting deeply unethical causes”.

In the meantime, as The Verge article recalls, legislation still has its advantages, and existing legal mechanisms can be mobilized to challenge algorithmic decisions: the US Department of Housing has just sued Facebook for discrimination, because its ad-targeting system lets real-estate advertisers exclude people from housing ads based on their skin color, religion or place of residence.
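By way of illustration, US discrimination law already offers quantitative yardsticks that such a lawsuit can lean on, such as the “four-fifths rule” used in disparate-impact analysis. A minimal sketch, with hypothetical group names and audience counts:

```python
# Hypothetical sketch of the "four-fifths rule" used in US disparate-impact
# analysis: flag a group whose selection rate falls below 80% of the highest.
# Group names and counts are invented for illustration.
audience = {
    # group -> (users shown the housing ad, eligible users in that group)
    "group_a": (480, 1000),
    "group_b": (150, 1000),
}

rates = {g: shown / eligible for g, (shown, eligible) in audience.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    verdict = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```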

According to a newsletter on algorithms from the Technology Review, the Algorithmic Accountability Act, a bill introduced in 2019, would:

  • regulate algorithms by requiring large companies to assess their systems for bias,
  • give the Federal Trade Commission the power to bring proceedings that could lead, in particular, to the conviction of discriminating companies.

The Act would require large algorithmic companies to assess the biases in their systems through audits or impact reports.
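What might such an assessment look like in practice? One common audit compares a system’s error rates across demographic groups. A minimal sketch, with hypothetical groups, decisions and outcomes invented for illustration:

```python
# A minimal sketch of one check a bias audit might run: comparing false
# positive rates across groups. Records are hypothetical, invented here.
decisions = [
    # (group, system_flagged, actually_positive)
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", False, True), ("group_b", True, False),
]

def false_positive_rate(group):
    # Among people who are actually negative, how many did the system flag?
    negatives = [flagged for g, flagged, actual in decisions
                 if g == group and not actual]
    return sum(negatives) / len(negatives)

for group in ("group_a", "group_b"):
    print(f"{group}: false positive rate {false_positive_rate(group):.0%}")
# A large gap between groups is the kind of finding an impact report would flag.
```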

Other US senators have also proposed a specific law on facial recognition requiring explicit consent to the sharing of facial recognition data.

For Mutale Nkonde, a researcher at the Data & Society Research Institute, involved in the development of these regulatory processes, the Algorithmic Accountability Act is part of a broader strategy to establish regulatory oversight of all AI processes and products in the future.

She foresees a future bill on the spread of misinformation and another prohibiting practices designed to manipulate consumers into giving up their data.

The problem is that these technologies cut across domains: “Facial recognition is used for a lot of different things, so it will be difficult to say: these are the rules applicable to facial recognition.”

For Mutale Nkonde, this regulatory movement is likely to proceed technology by technology. Yet these questions remain very poorly understood by American elected officials as a whole.

Other initiatives are also underway. U.S. Senator Elizabeth Warren (@teamwarren), a candidate for the Democratic nomination in the 2020 presidential election, published a call on Medium to dismantle the GAFAMs. The Data Driven Investor publication added:

“In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave — rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation which has already come into effect in the European Union last month. But more robust regulation of Silicon Valley isn’t enough”.

Moreover, she has:

  • introduced a bill that would make platforms legally liable for any leak of personal data (a problem that has become endemic),
  • proposed that the GAFAMs be taxed not on the income they declare to the tax authorities, but on the income they present to their investors and shareholders.

When it comes to regulating automated systems, the ethics smokescreen could therefore quickly give way to the more immediate promise of regulatory change and legal proceedings.

In the United States, environmental impact assessments allow the public to comment. Auditors cannot be sworn experts alone: communities and representatives of the public interest must also be involved. Nor does the Algorithmic Accountability Act provide any right to impose transparency on the results of impact assessments.

Without advocating full transparency for these audits, experts believe that, at a minimum, the FTC, as the evaluating authority, should produce an annual report on the lessons learned from the impact studies carried out!

How do issues become priorities on the innovation agenda? How, to address these problems, do standards emerge among many competing solutions?

In applying these standards, what procedures are being put in place? And to avoid the heaviness, biases or blind spots of those procedures, what small or large workarounds are invented day by day?

These are all questions on which tomorrow’s society will depend, with machines, but above all with humans.

