Introduction
In this article I will discuss the ethical and moral aspects of Artificial Neural Networks (ANNs). I argue that, although ANNs are incredibly powerful tools, they require careful thought to mitigate the risk of negative ethical consequences, which arise easily and unexpectedly.
What is an Artificial Neural Network?
An artificial neural network (ANN) is a logical structure inspired by the behavior of biological neurons. There exist many variations of neural nets, but the general idea is to simulate the way each biological neuron receives input (usually from other neurons) and fires a signal to its output (usually to other neurons) when the input potential reaches a certain threshold [1][2].
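To make the threshold idea concrete, below is a minimal sketch of a single artificial neuron in Python. The inputs, weights, and threshold are hypothetical numbers chosen for illustration; real ANNs typically replace the hard threshold with a smooth activation function and learn their weights from data rather than having them set by hand.

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """A single artificial neuron: weigh the inputs, sum them,
    and 'fire' (output 1) only if the total reaches the threshold."""
    potential = np.dot(weights, inputs)
    return 1 if potential >= threshold else 0

# Hypothetical example: two inputs, the first weighted much more heavily.
print(neuron(inputs=np.array([0.9, 0.2]),
             weights=np.array([0.8, 0.1]),
             threshold=0.5))  # prints 1: the neuron fires
```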
For the purposes of analyzing the ethics surrounding ANNs, they can be thought of as complex, effective tools for performing classification and regression on data of arbitrary complexity. They are what enable modern image and facial recognition, large-scale data mining for advertising, automated scan analysis, and more [3]. It is also important to note that they can only learn to perform a task when given training data that is representative of that task.
Personal Background
My name is Aman Bhargava and I am a student of Engineering and Machine Intelligence. Having been around computers and electronics all my life, I have a strong interest in artificially intelligent systems. I have completed many projects involving machine learning and ANNs, and this year I founded a medical analytics company that relies heavily on Machine Intelligence, ANNs, and other digital technologies [4]. I aim to leverage my experience with the technology to offer a comprehensive analysis while minimizing my natural bias.
Values of Artificial Neural Networks
ANNs embody the values of those who designed them. According to Postman’s paper Five Things We Need to Know About Technological Change, technologies always have winners and losers, even if only because the technology benefits some a little more than others [5]. A natural next conclusion is drawn by Winner in Do Artifacts Have Politics?, where it is argued that, because the benefits and detriments of a technology are not evenly distributed across socio-political groups, technologies themselves have politics [6].
My world view plays a pivotal role in the selection and presentation of the points in this analysis. I have attempted to provide a balanced view of the technology by speaking to the benefits and the detriments of each value embodied by ANN’s via case studies and my personal experience with the algorithms themselves.
Utilitarianism
An ANN optimizes itself over the course of its training simply by maximizing accuracy over a training set [2], which is a fundamentally utilitarian design [7]. Whether this involves making morally questionable correlations (e.g. racial profiling) is not considered by the ANN. This value is part of what makes ANNs such powerful tools. Part of the beauty of ANNs is the way they turn the qualitative into the quantitative [2]. However, it is easy to forget that they have no emotions or morals: they are just a series of mathematical computations with no sense of right or wrong. The only thing they can ‘sense’ and react to is their accuracy on the training set.
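A toy sketch of the kind of objective an ANN optimizes makes this concrete. Mean squared error is assumed here purely for illustration; cross-entropy and other losses are equally common, and all of them are just numbers to be minimized.

```python
import numpy as np

# The only thing the network 'sees' during training: a single number
# measuring how far its predictions are from the training labels.
def training_loss(predictions, labels):
    # Mean squared error: the network is rewarded solely for matching the labels.
    return np.mean((predictions - labels) ** 2)

# Training (gradient descent, backpropagation, etc.) nudges the weights in
# whatever direction makes this number smaller. Nothing in the formula
# encodes fairness, dignity, or any other moral consideration.
```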
Computer Power and Brain Power
ANNs implicitly value computing power and mathematical knowledge [2]. ANNs need significant computing power to train, especially for high-dimensional tasks like image processing and voice recognition. Additionally, one requires at least a high-level understanding of some relatively complex concepts in order to even begin employing a neural network. Although there exist services that enable one to train a network without coding, the optimization parameters are not particularly simple to understand. If one wishes to code their own neural network for greater customization, significant mathematical understanding is required to implement the linear algebra and multivariable calculus involved [2].
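To give a sense of the mathematics involved, here is a minimal sketch (in NumPy, with hypothetical layer sizes and random weights) of the forward pass of a tiny fully-connected network; training it additionally requires the multivariable calculus behind backpropagation and gradient descent.

```python
import numpy as np

# The forward pass of a small fully-connected network is linear algebra:
# matrix-vector products followed by element-wise nonlinearities.
def forward(x, W1, b1, W2, b2):
    hidden = np.tanh(W1 @ x + b1)                  # first layer
    return 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # sigmoid output layer

# Hypothetical dimensions: 4 inputs, 8 hidden units, 1 output.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
print(forward(x, rng.normal(size=(8, 4)), rng.normal(size=8),
              rng.normal(size=(1, 8)), rng.normal(size=1)))
```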
Results over Interpretability
Interpretability is the degree to which a human can understand the reasoning behind an algorithm’s decisions [8]. For example, in logistic regression, one can look at the parameters and directly see how heavily a given input is weighted [9]. Meanwhile, after much observation of extremely simple neural nets, one can sometimes discern a vague meaning for the weights assigned to different connections [10]. However, as complexity increases, interpreting neural nets becomes intractable (though it does make some really cool art when we take a peek inside [11]). The structure of ANNs implicitly values results over interpretability due to their complex, at times convoluted architecture, which is again part of what makes them effective tools.
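A small illustration of this contrast, sketched with scikit-learn on synthetic data (the dataset and layer sizes are assumptions chosen purely for demonstration): the logistic regression yields one readable coefficient per input feature, while the neural network spreads what it has learned across layers of weight matrices with no individual meaning.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Hypothetical toy data: 500 samples, 5 features, binary labels.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Logistic regression: one weight per input feature, directly readable.
logreg = LogisticRegression().fit(X, y)
print(logreg.coef_)                    # shape (1, 5): how each feature is weighted

# A small neural network on the same task: the parameters are spread across
# layers of weight matrices that resist direct human interpretation.
mlp = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000,
                    random_state=0).fit(X, y)
print([w.shape for w in mlp.coefs_])   # [(5, 20), (20, 20), (20, 1)]
```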
Direct Ethical Consequences
Utilitarianism
When taken to a quantitative extreme, as it is in ANNs, utilitarianism has massive benefits and drawbacks depending on the situation. For example, when performing a beneficent task (e.g. diagnosing patients, safely directing traffic, etc.), this statistical utilitarianism has great ethical utility. The algorithm can be extremely effective by finding complex and/or unintuitive patterns that humans (and other algorithms) are unable to find.
In social situations, however, it is easy for minority voices to be crushed by utilitarian philosophy. Take the example of an ANN trained to evaluate whether a bank should give a loan to a given individual: if race happens to be the decision boundary that the neural network finds, it will make racist judgments [12]. This type of problem is a form of under-fitting, where the algorithm settles on an overly simplistic decision boundary [2].
Although the utilitarian philosophy upon which neural nets are based is a large part of what makes them perform well on computational tasks, it is important that those who employ the algorithm keep this behavior in mind and take steps to ensure that it does not do harm.
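One simple (and by no means sufficient) step is to audit the model’s outputs across demographic groups. Below is a minimal sketch of such a check, with hypothetical predictions and group labels; a large gap between groups does not prove discrimination on its own, but it is a clear signal to re-examine the model and its training data.

```python
import numpy as np

def approval_rate_by_group(approved, group):
    """Audit helper: fraction of applicants approved within each group."""
    return {str(g): float(approved[group == g].mean()) for g in np.unique(group)}

# Hypothetical model outputs (1 = loan approved) and group labels.
approved = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(approval_rate_by_group(approved, group))  # {'A': 0.75, 'B': 0.25}
```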
Computer and Brain Power
Since ANNs require significant computing power to run at a large scale (particularly with high-dimensional data), they massively benefit those who are in a position (financially, geographically, politically) to access such computing power. More computing power begets more computing power, widening the divide between the ‘haves’ and the ‘have nots’ of computing power [6].
Those who have the math and computer science background are the definite winners in this situation, if only because they hold somewhat more control over the way in which these algorithms are used. They are also able to leverage their knowledge and ability to implement ANNs for their own direct gain; for example, I would not have been able to get the jobs I received or start my company without the privilege of said background.
Therefore, it is of great ethical utility to democratize ANNs and other forms of ML/AI. For instance, at TOHacks, my team created Democracy.AI, a platform that enables people to access and use algorithms from researchers through a simple online user interface [13].
Results over Interpretability
The inability to directly understand the line of reasoning ANNs follow makes it easy, and tempting, to use them as absolute judges of the world [14]. It invites users of the technology to shirk responsibility and adopt Aristotle’s vice of cowardice [15].
After all, ANNs are meant to be emotionless, effective pattern finders that drive towards perfecting their performance on a given task. But this can result in systemic injustice, as in the aforementioned examples. The shirking of responsibility is a socially constructed consequence, because it is still humans who make the decisions. It is easy to forget that moral boundaries cannot be explicitly programmed into conventional ANNs. If an algorithm’s goal is to get more clicks on advertisements, for example, you cannot directly specify that it should not capitalize on, or even exacerbate, a target demographic’s potential mental illnesses.
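As a sketch of why such constraints are hard to ‘program in’, consider the kind of objective a click-prediction model would typically minimize (a standard cross-entropy on clicks is assumed here). Everything the model is told to care about lives in this one formula.

```python
import numpy as np

# Binary cross-entropy on clicks: a standard objective for click prediction.
def click_loss(predicted_click_prob, clicked):
    eps = 1e-12  # avoid log(0)
    p = np.clip(predicted_click_prob, eps, 1 - eps)
    return -np.mean(clicked * np.log(p) + (1 - clicked) * np.log(1 - p))

# There is no variable here for the user's wellbeing, so 'do not exploit
# vulnerable users' cannot simply be appended as a line of code; it would
# require redesigning the objective and the data it is trained on.
```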
Conclusion
Clearly, the ethics of Machine Intelligence (in this case ANNs) are difficult to infer or predict in every case. If one thing has been made clear by this investigation, it is that deep consideration is required in the way we employ these algorithms. Again, ANNs are powerful tools that can easily be misused even on the path to a righteous goal. AI credit evaluation systems meant to make loans more accessible can do the opposite when not trained well. Targeted advertising systems meant to meet more of the consumer’s needs can do so at the expense of their wellbeing. As Machine Intelligence is further integrated into our lives and industries, it is important that we all take the time to think about how we should be leveraging (or, at times, not leveraging) this powerful tool.
References
[1] H. Lodish, A. Berk, and S. L. Zipursky, “Overview of Neuron Structure and Function,” Molecular Cell Biology, no. 4, 2000.
[2] A. Ng, Class Lecture, Topic: “Neural Networks” Machine Learning, Coursera, 2017.
[3] C. Butticè, “5 Neural Network Use Cases That Will Help You Understand the Technology Better,” Techopedia, 2018.
[4] A. Bhargava and E. Scott, “CareTrack”, CareTrack.io, 2019 [Online]. Available: http://www.caretrack.io/. [Accessed: 19-Oct-2019]
[5] N. Postman, “Five Things We Need to Know About Technological Change,” 28-Mar-1998. [Accessed: 19-Oct-2019]
[6] L. Winner, “Do Artifacts Have Politics?,” Daedalus, vol. 109, no. 1, pp. 121–136, 1980. [Online]. Available: www.jstor.org/stable/20024652. [Accessed: 19-Oct-2019]
[7] B. Duignan and H. R. West, “Utilitarianism,” Encyclopædia Britannica. [Online]. Available: https://www.britannica.com/topic/utilitarianism-philosophy. [Accessed: 19-Oct-2019].
[8] C. Molnar, “Interpretable Machine Learning,” Christoph Molnar, 18-Sep-2019. [Online]. Available: https://christophm.github.io/interpretable-ml-book/. [Accessed: 19-Oct-2019].
[9] A. Ng, Class Lecture, Topic: “Linear Regression” Machine Learning, Coursera, 2017.
[10] C. Molnar, “Interpretable Machine Learning,” 7.1 Learned Features, 18-Sep-2019. [Online]. Available: https://christophm.github.io/interpretable-ml-book/cnn-features.html. [Accessed: 19-Oct-2019].
[11] “Inceptionism: Going Deeper into Neural Networks,” Google AI Blog, 17-Jun-2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html. [Accessed: 19-Oct-2019].
[12] M. Lefkowitz, “Study: AI may mask racial disparities in credit, lending,” Cornell Chronicle, 29-Jan-2019. [Online]. Available: https://news.cornell.edu/stories/2019/01/study-ai-may-mask-racial-disparities-credit-lending. [Accessed: 19-Oct-2019].
[13] A. Bhargava, A. Carnaffan, and J. Galiyini, “Democracy.AI,” GitHub: Amanb2000, 23-Jun-2019. [Online]. Available: https://github.com/amanb2000/Democracy.AI. [Accessed: 19-Oct-2019].
[14] World Economic Forum Global Future Council on Human Rights, World Economic Forum, 2018.
[15] J. K. Thomson, “Aristotle’s Ethics: Table of Virtues and Vices,” Aristotle’s Virtues and Vices. [Online]. Available: https://www.cwu.edu/~warren/Unit1/aristotles_virtues_and_vices.htm. [Accessed: 19-Oct-2019].