Microsoft and AI Ethics

Michael K. Spencer

Microsoft has something it calls its AI principles. These include:

  • Fairness: AI systems should treat all people fairly
  • Inclusiveness: AI systems should empower everyone and engage people
  • Reliability & Safety: AI systems should perform reliably and safely
  • Transparency: AI systems should be understandable
  • Privacy & Security: AI systems should be secure and respect privacy
  • Accountability: AI systems should have algorithmic accountability

Does this sound remotely like the internet and world we know?

These are great guidelines in theory. Microsoft has lately shown it takes the ethical implications of AI seriously: its president, Brad Smith, met with Pope Francis (Reuters) in February 2019 to discuss how best to create responsible systems, and the company is considering a proposal to add AI ethics to its formal list of product audits.

Then in March 2019, Forbes reported that Microsoft's executive vice president of AI and Research, Harry Shum, told the crowd at MIT Technology Review's EmTech Digital conference that the company would someday add AI ethics reviews to its standard checklist of audits for products to be released.

Internally, Microsoft appears to be working toward an AI ethics strategy that would influence operations companywide, not just at the product stage.

Adding an "AI ethics review" to the checklist of audits preceding the release of new products (in addition to the existing audits for privacy, security, and accessibility) sounds like a genuinely good idea.

As MIT Technology Review notes, even Beijing now cares about AI ethics. In fact, China is arguably far more organized about ethical reviews in its technology companies than America is in its own, and that's no coincidence: the Chinese government is more deliberate about the future, and its plan to take AI innovation seriously.

Google scrapped its own controversial AI ethics council after considerable backlash. Silicon Valley isn't known for auditing its products or practices very well, and why should it be, when there's no regulatory or ethical body out there to enforce it? There is no government body to answer to, and no president or politicians who are not heavily lobbied.

Microsoft can play good cop and Google can play bad cop with its unethical implementations of machine-learning tools, but society isn't really listening to either. Technologists like me mostly know what's going on, yet negative stories are censored on Medium for violating its curation guidelines. Which leaves us with the age-old question: where do the ethics of media and AI engineering really stand?

As Forbes goes on to mention, a roundup of AI ethics programs launched by Microsoft, Google, Amazon, and Tesla shows a range of successes and failures over the last year, from product overhauls designed to address biases to the rejection of research showing critical biases in AI architecture. Here again, the PR and the vision are not yet being implemented in practice.

Silicon Valley can hint at ethics reviews in AI development, and the Department of Justice can hint at probing the likes of Facebook and Amazon, but it's all talk until regulation of Big Tech actually takes place, if indeed it ever will.

The antitrust case against Microsoft all those years ago and taking on today's American duopolies are different things, because Big Tech, FAANG and Microsoft alike, is now the American firewall against a rising tech dynasty in China.

Sure, I believe in Fate. Microsoft, it turns out, has several internal working groups dedicated to AI ethics, including Fairness, Accountability, Transparency, and Ethics in AI (FATE), a group of nine researchers "working on collaborative research projects that address the need for transparency, accountability, and fairness in AI."

But when you become as powerful as Microsoft, Facebook, Apple, and Google are today, I don't believe even your own employees know or understand the dangers of what unregulated AI is becoming.

Microsoft also has an advisory board, AI, Ethics, and Effects in Engineering and Research (Aether), which reports to senior leadership. I guess we'll just leave it to them, then: Fate and Aether. It's too bad Microsoft and Apple are in many ways behind Google and Amazon in actual consumer AI products.

A number of AI researchers think the technology industry needs to impose an ethical framework on artificial intelligence and machine-learning platforms. As if things like male bias could simply be hard-coded out of software engineering products and machine-learning practices.

Google also has all kinds of guidelines about not being evil ("Don't be evil," I think the old motto read). Wonderful things like:

  • “Socially beneficial.”
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Unfortunately, the actual internet doesn't truly care about the ideals of tech companies. I'm not sure the executives or engineers at these companies care much either about the long-term ethical implications of what they are contributing toward.
