The booming rise of the AI industry shows the disruption technology can bring.

Abhivardhan

The booming rise of the AI industry shows the disruption technology can bring. This is often cited as the reason Artificial Intelligence is different in its topological paradigm: the adversities of disruption let us predict and estimate how AI influences enculturation. Many examples can be drawn from customer experience, and they can be assessed by understanding the basic concomitants that make up the instrumentalities of AI Ethics.

AI Ethics is commonly invoked in the operational modalities of business, governance and marketing, such as customer experience and algorithmic policing. In customer experience, data mining and natural language processing appear genuinely helpful. Yet this entitlement is an imperative one: it defines how deeply we use AI, not how much we recognize and realize about it. A similar problem exists with algorithmic policing. The classic example is the treatment of the Uighur minority in Xinjiang, China, where algorithmic policing serves state interests and offers no central certainty as to the legitimacy and viability of the AI Ethics being applied.

Another instance comes from the technique known as pruning. Pruning reduces the size of a model and speeds up both training and inference, which in turn reduces the environmental costs of the AI networks involved (a minimal sketch of the idea follows this passage). This matters in light of research from the University of Massachusetts Amherst, which estimated that training a single large transformer-based neural network can emit more than 626,000 pounds of carbon dioxide. Like Blockchain, AI carries both market and environmental costs.

However, a special estimation of technology culture can be drawn from the idea of technology distancing. One thing is absolutely clear: we are distancing technology from human empathy and activity, in the sense of ontological anthropocentric control. Arnold Pacey (1998) points to the enculturing development of technology, and the point is simple to grasp: earlier we wrote with ink; we still use it, but writing today is more a matter of touch and digital control. Owing to the instrumentalisation and rapid advancement of transmedia, technology has gained enormous impetus in the modern age of globalization. Nevertheless, the problem with AI that demands review concerns the dynamism that empathy and dimensional thinking connote and contribute to human society. We must realize that artificial intelligence is not meant to be a utility for every numbered task. The way AI works is also connected to the techno-cultural socialization we enable, and if we lose our grip on bettering human empathy, we are going to lose our humanity.
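To make the idea of pruning concrete, here is a minimal sketch of magnitude-based weight pruning using PyTorch's torch.nn.utils.prune utilities. The single linear layer and the 30% sparsity target are illustrative assumptions of mine, not figures from the research discussed above.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for one layer of a much larger network.
layer = nn.Linear(512, 256)

# Zero out the 30% of weights with the smallest absolute values (L1 magnitude).
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Fold the pruning mask into the weight tensor so the zeros become permanent.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Sparsity after pruning: {sparsity:.0%}")  # roughly 30%
```

Discarding low-magnitude weights in this way is what shrinks the model and cuts the computation, and hence the energy, spent at inference time.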

Another controversy that is imperative with AI is the legal status we should give it. In Saudi Arabia, the UK, the EU and other D9 nation-states, most of the declarations, including the GDPR, the EDPS declaration of October 2018, the Data Protection Bill in India and the US Algorithmic Accountability Act (which is limited to California), build on the idea of the rights that the GDPR provides. However, corporate responsibility with regard to AI may not render a suitable future for technology, because of the growing trend of treating development (rather than serenity in livelihood) as an obsessive identity. It is certainly true that we need economic development, but the treatment and misuse of technology on a wider scale will not benefit the generations to come. There are two important principles of the EDPS declaration, which I quote:

  1. The Fairness Principle
  2. The Privacy by Design & Default Principle

None of these principles, nor the UN reports and suggestions or other documentation by the EU and OECD, certifies that an AI needs an entitative semblance with humanity. The problem with the Fairness Principle is that it limits its scope to legal accountability, liability and responsibility in thin, retributive formations. Moreover, the principle offers no domain of understanding of how an AI is recognized. Why do we assume that an AI cannot be capable of decision-making, or of a self-transformative existential envisioning of itself? The simple reason is the lack of technology: we still have weak AIs, built for face, text and speech recognition, NLP, customer experience, algorithmic policing and the like, and even Sophia may not have reached that mettle yet. The problem is that if fragmentation is a possibility, are we not harming the machinic identity and empathy of artificial intelligence simply because we are obsessed with anthropocentrism? A similar problem exists with the idea of Privacy by Design and Default: it challenges the non-interventionist role of human society and further encourages technology distancing, without making social development, enculturation and the improvement of the human ecosystem accountable. These issues are very important, and that is why we need to work out solutions to such aspects, which are already present.

We also need to determine precisely how an AI is to be recognized as a legal personality. If we confer on an AI the legal right to possess an entitative and transformative nature, it can do wonders. At the same time, we have to prevent technology distancing by improving education standards, and not just improving the numbers.

The rest is covered in my book, “Artificial Intelligence Ethics and International Law: An Introduction”, published by BPB Publications. You can buy the book here at a discount.

