An Ethical Horizon for Artificial Intelligence (Part 1/4)


Self-governance can improve the future of AI, if companies are brave enough to adopt ethical tools and new business model leadership now.

PART 1

Artificial intelligence is mature enough for professional ethics, but legal and academic haggling could roll on for many years, as it has with privacy policy governance. We live in a world quick to fund and produce weaponized artificial intelligence, while commercial AI leads a quietly unchallenged data reign, relatively unfettered by ethical discipline. With examples of poor ethical behaviour involving AI abounding, the consumer public can’t necessarily afford to wait for policy wonks to emerge with a brand of consensus.

A significant percentage of the US academic community enamored with AI will continue to enable power differentials that actively harm human rights interests. If you leave subjective ethical preferences exclusively to academic AI developers, you may wait behind the political will of public grant funders. If you leave ethics to the companies that use and market AI, you might invoke consumer or market preferences and take your business elsewhere, yet still feel the effects of encroachment.

The future would be bright if businesses got hold of conscious-capital principles. For example, the health food market started out rough, then improved every two to three years with better-quality food sources, increasingly diverse options, and adaptation to culinary trends. Thirty years later, it poses significant competition to conventional market offerings, and conventional grocers now stock more health food because consumers demand it. Competition stemming from privacy limitations sharpens the understanding of what is and what can be. If you want more privacy in the market, you will have to create it, and the environment for it.

Given current levels of risk, privacy and security positioning shouldn’t take 30 years. Nor do you have to wait long, because social and technical innovations are already present in the marketplace now. Smaller companies are well positioned to adopt flexible UI, security, and ethics principles from the ground floor; larger companies take much longer to retool their offerings. Loyal consumers should continue to speak up for what they want and affirm the right direction for privacy and security options.

The good news is that AI has reached a level of business and adoption maturity that justifies demand for ethical balances and corporate restraint. Corporate self-governance frameworks can expedite ethics as a deliverable, competitive offering to consumers now. De-identification tools and ethics proposals are on the table all over the modernizing world, from thoughtful social innovators who want computing futures to succeed without harming consumers.

The span of concerns over harm is proportional to AI’s ubiquitous presence in the marketplace. Big data (machine learning), the Internet of Things (IoT), and drone robotics are examples of AI innovation in conflict with human interests. Social innovation can help manage the need in key areas flagged for ethical safeguards: bias, fair information practice, and proprietary rights with accountability.

I will examine each of these areas for social innovation in the coming days.

COMING NEXT… An Ethical Horizon for Artificial Intelligence, Bias (Part 2/4)