An Ethical Horizon for Artificial Intelligence: Bias (Part 2/4)

Bias has afflicted societies since the dawn of civilization.  Machine learning innovations, like money and tools, hold great potential for both good and evil.  Introducing ethics into the culture of AI now may steer its social intent toward good and alter the course of human history.

PART 2


Among the plainest issues facing legal ethics is the human incorporation of bias.  There is hard evidence that the confirmation biases of the engineers producing artificial intelligence present a design problem.  The bias can play out so homogeneously among engineers that it goes unnoticed.

Today, web code is not examined by lawyers for bias. Since discrimination is a legally definable harm, we may see the evolution of beta bias testers who trace how machine learning is appropriated or misappropriated from its design stages onward.  From there, you either fix the code promptly or do without the machinery until it is fit for use.

A more loaded issue arises when machine-learned AI is deliberately aimed at discrimination by administrative intent or governance. This is an age-old pattern in Western and global societies that believe controlling an "undesirable" class reserves labour and human resources for the socially privileged class.

One of the earliest recorded instances of American machine-driven processing of "undesirables" pre-dated the PC.  IBM issued punch cards that sorted people by codes assigned to identity classes.  If you didn't make the cut, the Nazi final-solution team showed up at your door to collect you based on your code type.  God forbid a typo registered you as an undesirable when you were in fact green-lit to live a normal life in Hitler's regime. You can imagine it would be a very tough day at Google if a government coerced its ad filters into serving ethnic cleansing.  Thankfully, most companies using machine learning today remain free to refuse business from democidal foreign governments.

Companies and data governance teams should actively recruit ethics professionals to innovate alongside their AI technologies before any government agency gets involved. Internal paternalism fosters more socially responsible companies; once external bodies, like governments, produce a mandate, you may be conceding to their version of ethics.  AI professionals need to attend privacy and ethics committee meetings on social impacts the same way a parent attends local PTA meetings.  There, AI makers can speak up for the intent of their product and receive meaningful guidance to ward off foreseeable consequences.

A prescriptive ethics treatment for problem biases would be to add periodic ethical and engineering audits that track value manipulation. Legal safeguards could include a private right of action to prosecute entities, government actors, or agencies whose biased values persist under color of law, or which further deliberate acts of sabotage by those using AI.
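For illustration, here is a minimal sketch of what such a periodic audit could compute, assuming a hypothetical decision log where each record carries a protected-group label and a binary outcome; the field names and the four-fifths screening threshold are assumptions for the example, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical audit sketch: flag groups whose favorable-outcome rate falls
# below 80% of the best-treated group (the common "four-fifths" screen).
FOUR_FIFTHS = 0.8

def selection_rates(decisions):
    """Rate of favorable outcomes per protected group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        favorable[record["group"]] += record["approved"]  # 1 = favorable, 0 = not
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions):
    """Return groups whose relative selection rate falls below the threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if best and r / best < FOUR_FIFTHS}

# Example audit over a toy log
log = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
]
print(disparate_impact_flags(log))  # {'B': 0.5} -> group B warrants human review
```

Run on a schedule and logged, a check like this gives the audits described above something concrete for ethicists and engineers to review together.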

The upshot is that systemic forensics to discover discrimination-by-data or predatory targeting is simple, because a computer system keeps a record of what it does. Criminal process will admit computing data as evidence.  Routine internal reports made public can push ethics up the governance priority list, heading off liability claims and PR disasters.  People would much rather read, "We found it and fixed it!" than "24 women were instantly deleted from consideration due to reproductive age and marital status."

In the US, value targeting can be sorted by any identifiable term. Predictive policing and even parole decisions have been found to tip out of hand when AI is used.  So an additional layer of accountability is needed for banal claims like, “It was the machine, not me,” or “I was just doing my job.”
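One way to make that accountability concrete is a decision provenance log. The sketch below is a hypothetical illustration (the field names and file path are assumptions): every automated decision is recorded alongside the model version and any human override, so "it was the machine" becomes a claim an auditor can check rather than a deflection.

```python
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical append-only audit file

def log_decision(model_version, features, score, decision, human_override=None):
    """Append one decision record so machine vs. machinist can be reconstructed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs rather than storing them raw, to limit data exposure.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        "decision": decision,
        "human_override": human_override,  # None when no person intervened
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: the model scored the applicant low, but a named reviewer approved anyway.
log_decision("credit-model-v3", {"age": 34, "income": 52000}, 0.41,
             decision="approve", human_override="analyst_217")
```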

To the relief of most, technical audits can evaluate whether it was the machine or the machinist that produced the value biases.  From there, it graduates to a public affairs issue, which follows a well-rutted course for handling complaints.

###

NEXT: An Ethical Horizon for Artificial Intelligence: Truly fair information practice (Part 3/4)