Theopolis Monk, Part V: Further Fertile Fields

For the closing section of this five-part series (the previous sections being Part I, Part II, Part III, and Part IV), I have selected five areas of current conversation that I find particularly worth paying attention to. This list is neither exhaustive nor authoritative.

A. Bias

A popular topic of conversation in recent years has been biased machine learning models, such as those which associate negative connotations with certain races [1] or predict employability on the basis of gender [2], although such occurrences are nothing new to statisticians and have been attributed as much to “Big Data” as to AI [3]. There are numerous conversations regarding how to “fix” bias [4] or at least detect, measure, and mitigate it [5]. While these are important and worthy efforts, one can foresee that as long as there are people doing sloppy statistics, there will be biased models; and machine learning (ML) automates sloppy statistics, typically not through the algorithms involved but through the datasets used to train the models. Thus the problem of bias is both a current topic and one which is likely to remain relevant for some time to come.
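
As a concrete (and deliberately tiny) illustration of measuring bias, the sketch below compares a model’s positive-prediction rates across groups, sometimes called a demographic-parity check. The column names, toy predictions, and grouping variable are hypothetical and meant only to convey the idea, not to represent any particular method from the references above.

```python
# A minimal sketch of one common bias check: comparing a model's
# positive-prediction rates across groups (a "demographic parity" gap).
# The column names, values, and data here are hypothetical, for illustration only.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the largest difference in positive-prediction rates between groups."""
    rates = df.groupby(group_col)[pred_col].mean()   # P(prediction = 1 | group)
    return float(rates.max() - rates.min())

# Toy predictions from some already-trained model:
df = pd.DataFrame({
    "gender":    ["F", "F", "F", "M", "M", "M"],
    "predicted": [0,    0,   1,   1,   1,   0 ],   # 1 = positive outcome, e.g. "hire"
})
gap = demographic_parity_gap(df, "gender", "predicted")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here: rate of 1/3 for F vs 2/3 for M
```

In practice such a check is only a starting point; whether a given gap constitutes unacceptable bias depends on context and on which fairness criterion one adopts.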

B. Black Boxes vs. Transparency

In Part II, we mentioned requirements that algorithmic decisions be “explainable” [6], as opposed to coming from “opaque” [7] systems which function as “black boxes” [8, 9]. Two main approaches present themselves:

  1. Probing Black Boxes. One approach is to probe black-box systems by observing how they map inputs to outputs. Examples include learning the decision rules of a system in an explainable way (even mimicking the existing system) [10] and extracting “rationales” [11], short textual summaries of the most significant input data. A related approach maps entire input sets at a time to bound the possible outputs of a system, e.g. for safety verification [12]. (A toy probing sketch follows this list.)
  2. Transparency As a Design Requirement. For several years, there have been calls to produce systems which are transparent by design [13]. Such considerations are essential for users to form accurate mental models of a system’s operation [14], which may be a key ingredient in fostering user trust [15]. Further, transparent systems are essential for government accountability and for providing a greater sense of agency to citizens [7]. But how to actually design useful, transparent interfaces for robots [16, 17] and computer systems in general [18] remains an active area of research, both in terms of the designs themselves and in measuring their effects on human users, even when it comes to the education of data science professionals [19]. One cannot simply overwhelm the user with data. This is particularly challenging for neural network systems, where the dimensionality of the data exceeds the visualization capacities of humans; even on simple datasets such as MNIST, dimensionality-reduction methods such as t-SNE [20] and interactive visualizations [21] can still leave one lacking a sense of clarity. This is an active area of research, with two particularly active efforts being those of the group at the University of Bath (Rob Wortham, Andreas Theodorou, and Joanna Bryson) [16] and of Chris Olah [22]. It is also worth mentioning the excellent video by Bret Victor on designing for understanding [23], although it is not specific to algorithmic decision making.
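
To make the “probing” idea in item 1 concrete, here is a minimal sketch that queries a stand-in black box, perturbs one input at a time, and records how the output responds. This crude finite-difference probe is far simpler than the rule-extraction and rationale methods cited above; every function and variable name here is hypothetical.

```python
# A toy illustration of "probing" a black box: perturb one input at a time
# and record how the output changes. Real methods such as local rule
# extraction [10] are far more sophisticated; this sketch only conveys the idea.
import numpy as np

def black_box(x: np.ndarray) -> float:
    """Stand-in for an opaque model we can query but not inspect."""
    return float(x[0] * 2.0 + np.sin(x[1]) - 0.5 * x[2])

def local_sensitivities(f, x: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Estimate how strongly each input dimension influences the output near x."""
    base = f(x)
    sens = np.zeros_like(x)
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps
        sens[i] = (f(x_pert) - base) / eps   # finite-difference slope
    return sens

x0 = np.array([1.0, 0.0, 2.0])
print(local_sensitivities(black_box, x0))    # roughly [2.0, 1.0, -0.5]
```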

One “hybrid” form of the two approaches above involves providing counterfactual statements, as in the example: “You were denied a loan because your annual income was £30,000. If your income had been £45,000, you would have been offered a loan” [24]. The second sentence is a counterfactual, and while it does not offer full transparency or explainability, it provides at least a modicum of guidance. This may be a reasonable minimal prescription for rather simple algorithms, although for complex systems with many inputs, such statements may be difficult to formulate.
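
A toy sketch of how such a counterfactual statement might be generated for a simple threshold-based loan rule follows. The rule, threshold, and search procedure are hypothetical stand-ins; real counterfactual methods such as [24] search for the closest change to a trained model’s inputs that flips its decision.

```python
# A minimal sketch of a counterfactual explanation for a toy loan rule.
# The decision rule and search here are hypothetical; methods like [24]
# search for the *closest* input change that flips a real model's decision.
def loan_decision(income: float, threshold: float = 45_000.0) -> bool:
    return income >= threshold

def counterfactual_income(income: float, step: float = 1_000.0) -> float:
    """Find the smallest income (in coarse steps) at which the decision flips."""
    candidate = income
    while not loan_decision(candidate):
        candidate += step
    return candidate

income = 30_000.0
if not loan_decision(income):
    needed = counterfactual_income(income)
    print(f"Denied at £{income:,.0f}. If your income had been £{needed:,.0f}, "
          f"you would have been offered a loan.")
```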

C. AI Ethics Foundations

In reading contemporary literature on the topic of “AI Ethics,” one rarely sees authors state explicitly where they are coming from in terms of the foundations of their ethics; more often one sees only the “results,” i.e., the ethical directives built upon those foundations. Joanna Bryson, whom we have cited many times, is explicit about working from a framework of functionalism [25], which she applies to great effect, reaching conclusions that are often in agreement with other traditions. Alternatively, philosopher Shannon Vallor (co-chair of this year’s AAAI/ACM Conference on AI, Ethics and Society), in her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting [26], advocates applying virtue ethics to matters of technological development. Virtue ethics motivates good behavior on the principle of “excellence” of character, leading to the greatest thriving of the individual and thus of society. Drawing from the ancient traditions of Aristotelianism, Confucianism, and Buddhism, from religious parallels in Christian and Islamic thought, and from western philosophical treatises such as those of Immanuel Kant and the critiques of Nietzsche, Vallor develops an adaptive framework that eschews rule-based pronouncements in favor of “technomoral flexibility,” which she defines as “a reliable and skillful disposition to modulate action, belief, and feeling as called for by novel, unpredictable, frustrating, or unstable technosocial conditions.” In the Christian tradition, Brent Waters has written on moral philosophy “in the emerging technoculture” [27], and while he does not address AI in particular, many of his critiques provide (to borrow some jargon from machine learning) a “regularizing” influence, enabling one to approach the hype of AI development in a calm and reflective manner.

D. Causal Calculus

If neural networks and their ilk are mere “correlation machines” [28] akin to polynomial regression [29], how can we go from correlation to inferring causality? Put differently, how can we go from “machine learning” to “predictive analytics”? [30] Turing Award winner Judea Pearl, in his 2018 book The Book of Why [31] (aimed at a more popular audience than his more technical Causality [32]), offers a set of methods termed “causal calculus,” defined over Bayesian networks (a term coined by Pearl) [33]. The book has generated many favorable reviews from within the AI community and has been regarded as contributing an essential ingredient toward the development of more powerful, human-like AI [34]. In a 2018 report to the Association for Computing Machinery (ACM) [35], Pearl highlights seven tasks which are beyond the reach of typical statistical learning systems but have been accomplished using causal modeling. Many further applications of these methods by other researchers are likely to appear in the near future.
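
As a small illustration of the flavor of causal reasoning Pearl advocates, the sketch below applies the back-door adjustment, one of the basic identities of the causal calculus, to synthetic data in which a confounder Z influences both a treatment X and an outcome Y. The data-generating process and probabilities are invented purely for illustration.

```python
# A minimal sketch of Pearl-style back-door adjustment on synthetic data:
# a confounder Z influences both treatment X and outcome Y, so the naive
# conditional P(Y=1 | X=1) differs from the causal quantity
# P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.5, n)                        # confounder
x = rng.binomial(1, 0.2 + 0.6 * z)                 # treatment depends on Z
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)       # outcome depends on X and Z
df = pd.DataFrame({"Z": z, "X": x, "Y": y})

naive = df.loc[df.X == 1, "Y"].mean()              # P(Y=1 | X=1), confounded by Z

adjusted = sum(                                    # back-door adjustment over Z
    df.loc[(df.X == 1) & (df.Z == zv), "Y"].mean() * (df.Z == zv).mean()
    for zv in (0, 1)
)

print(f"naive P(Y=1 | X=1)     = {naive:.3f}")     # pulled upward by the confounder
print(f"adjusted P(Y=1|do(X=1)) = {adjusted:.3f}")  # close to 0.1 + 0.3 + 0.4*0.5 = 0.6
```

The point of the exercise is that the naive conditional overstates the effect of X because Z raises both the chance of treatment and the chance of a good outcome, whereas the adjusted estimate recovers the effect of intervening on X directly.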

E. Transformative AI

One does not need fully conscious, sentient AGI in order for AI to have a severely disruptive and possibly dangerous impact on human life on a large scale. Such systems will likely exhibit forms of superintelligence [36] across multiple domains, in a manner not currently manifested in the world (i.e., not in the familiar forms of collective human action or artifact-enhanced human cognition). Planning to mitigate the risks associated with such outcomes constitutes the field of AI Safety [37]. In late September 2018, the Future of Humanity Institute released a report by Allan Dafoe entitled AI Governance: A Research Agenda, in which he “focuses on extreme risks from advanced AI” [38]. Dafoe distinguishes AI Governance from AI Safety by emphasizing that safety “focuses on the technical questions of how AI is built,” whereas governance “focuses on the institutions and contexts in which AI is built and used.” In describing risks and making recommendations, Dafoe focuses on what he calls “transformative AI (TAI), understood as advanced AI that could lead to radical changes in welfare, wealth, or power.” Dafoe outlines a research agenda which seems likely to be taken up by many interested researchers.

Summary

Starting in Part I with an optimistic view of a future utopia governed by AIs who make benevolent decisions in place of humans (with their tendency toward warfare and abuse of the environment), we noted in Part II that AI systems are unlikely to represent the world or other concepts in ways which are intuitive or even explainable to humans. This carries a risk to basic civil liberties, and efforts to make such systems more explainable and transparent are actively being pursued. Even so, in Part III we saw that such systems do and will require human political activity, in the form of implementation choices and auditing (such as checking for bias), and thus humans will remain the decision-makers, as they should be. While the unlikelihood that quasi-religious hopes of future AI saviors will be realized may disappoint science fiction fans, it means, in the words of Christina Colclough (Senior Policy Advisor, UNI Global Union), that we can avoid “technological determinism” and can talk about and “agree on the kind of future we want” [39]. In Part IV we saw that AI is a powerful tool for good and for evil, and yet it is not “neutral”: it prefers large amounts of data (which may involve privacy concerns), it prefers large computing resources and thus large energy consumption, and it may favor unreflective “magical thinking” which empowers sloppy statistics and biased inferences. Drawing causal inferences from the correlations of machine learning is problematic, but work in the area of causal modeling may allow for much more powerful AI systems. These powerful systems may themselves become transformative, even existential, threats, and they will require planning for safety and governance to ensure that they favor human thriving. The conception of what constitutes human thriving is an active area of discussion among scholars with diverse ideological and religious backgrounds, and it is a fertile area for dialog between these groups toward the goal of fostering a harmonious human society.

Reality Changing Observations:

1. One sees conversations in various media about the role of “algorithms” in decision-making; to what extent should these conversations be reframed in terms of “training datasets”?

2. We mentioned counterfactual statements as a way of delivering some modicum of explainability. Certain theological traditions such as Molinism found it important to assert that God has knowledge of such statements. Do you think counterfactuals tell us anything important about reality?

3. How might a “distinctly Christian” approach to “AI ethics” differ from secular approaches?

Acknowledgements

This work was sponsored by a grant given by Bridging the Two Cultures of Science and the Humanities II, a project run by Scholarship and Christianity in Oxford (SCIO), the UK subsidiary of the Council for Christian Colleges and Universities, with funding by Templeton Religion Trust and The Blankemeyer Foundation.

References

[1] L. Matsakis, A. Thompson, and J. Koebler, “Google’s Sentiment Analyzer Thinks Being Gay Is Bad,” Motherboard, 25-Oct-2017.

[2] J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, 10-Oct-2018.

[3] C. O’Neil, Weapons of math destruction: how big data increases inequality and threatens democracy, First edition. New York: Crown, 2016.

[4] J. Bloomberg, “Bias Is AI’s Achilles Heel. Here’s How To Fix It,” Forbes, 13-Aug-2018.

[5] L. Dixon, J. Li, J. Sorensen, N. Thain, and L. Vasserman, “Measuring and Mitigating Unintended Bias in Text Classification,” presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES), 2018.

[6] B. Casey, A. Farhangi, and R. Vogl, “Rethinking Explainable Machines: The GDPR’s ‘Right to Explanation’ Debate and the Rise of Algorithmic Audits in Enterprise,” Berkeley Technology Law Journal (submitted).

[7] A. Campolo, M. Sanfilippo, M. Whittaker, and K. Crawford, “AI Now 2017 Report,” AI Now Institute, 2017.

[8] “Understanding the ‘black box’ of artificial intelligence,” Sentient Technologies Holdings Limited, 10-Jan-2018. [Online]. Available: https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/. [Accessed: 15-Oct-2018].

[9] F. Pasquale, The black box society: the secret algorithms that control money and information. Cambridge: Harvard University Press, 2015.

[10] R. Guidotti, A. Monreale, S. Ruggieri, D. Pedreschi, F. Turini, and F. Giannotti, “Local Rule-Based Explanations of Black Box Decision Systems,” arXiv:1805.10820 [cs], May 2018.

[11] T. Lei, R. Barzilay, and T. Jaakkola, “Rationalizing Neural Predictions,” arXiv:1606.04155 [cs.CL], Jun. 2016.

[12] W. Xiang, H.-D. Tran, and T. T. Johnson, “Output Reachable Set Estimation and Verification for Multilayer Neural Networks,” IEEE Transactions on Neural Networks and Learning Systems, no. 99, pp. 1–7, 2018.

[13] M. Boden et al., “Principles of Robotics,” The United Kingdom’s Engineering and Physical Sciences Research Council (EPSRC), 2011.

[14] K. Stubbs, P. J. Hinds, and D. Wettergreen, “Autonomy and common ground in human-robot interaction: A field study,” IEEE Intelligent Systems, vol. 22, no. 2, 2007.

[15] R. H. Wortham and A. Theodorou, “Robot transparency, trust and utility,” Connection Science, vol. 29, no. 3, pp. 242–248, 2017.

[16] R. H. Wortham, A. Theodorou, and J. J. Bryson, “What Does the Robot Think? Transparency as a Fundamental Design Requirement for Intelligent Systems,” presented at the Proceedings of the IJCAI Workshop on Ethics for Artificial Intelligence, 2016.

[17] R. H. Wortham, “Using Other Minds: Transparency as a Fundamental Design Consideration for Artificial Intelligent Systems,” Ph.D. Thesis, University of Bath, 2018.

[18] E. T. Mueller, “Transparent computers: Designing understandable intelligent systems,” Erik T. Mueller, San Bernardino, CA, 2016.

[19] B. Delibasic, M. Vukicevic, M. Jovanovic, and M. Suknovic, “White-Box or Black-Box Decision Tree Algorithms: Which to Use in Education?,” IEEE Transactions on Education, vol. 56, no. 3, pp. 287–291, Aug. 2013.

[20] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, pp. 2579–2605, Nov. 2008.

[21] A. W. Harley, “An interactive node-link visualization of convolutional neural networks,” in International Symposium on Visual Computing, 2015, pp. 867–877.

[22] C. Olah, A. Mordvintsev, and L. Schubert, “Feature Visualization,” Distill, vol. 2, no. 11, p. e7, Nov. 2017.

[23] B. Victor, “Media for Thinking the Unthinkable,” MIT Media Lab, April 4, 2013.

[24] S. Wachter, B. Mittelstadt, and C. Russell, “Counterfactual explanations without opening the black box: Automated decisions and the GDPR,” arXiv:1711.00399, 2018.

[25] J. J. Bryson and P. P. Kime, “Just an artifact: Why machines are perceived as moral agents,” in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2011, vol. 22, p. 1641.

[26] S. Vallor, Technology and the virtues: a philosophical guide to a future worth wanting. New York, NY: Oxford University Press, 2016.

[27] B. Waters, Christian moral theology in the emerging technoculture: from posthuman back to human. Farnham, Surrey ; Burlington: Ashgate, 2014.

[28] W. Geary, “If neural networks were called ‘correlation machines’ I bet there would be less confusion about their use and potential,” Twitter, 13-Jul-2018.

[29] X. Cheng, B. Khomtchouk, N. Matloff, and P. Mohanty, “Polynomial Regression As an Alternative to Neural Nets,” arXiv:1806.06850 [cs, stat], Jun. 2018.

[30] S. Kumar, “The Differences Between Machine Learning And Predictive Analytics,” D!gitalist Magazine, 15-Mar-2018.

[31] J. Pearl and D. Mackenzie, The book of why: the new science of cause and effect, First edition. New York: Basic Books, 2018.

[32] J. Pearl, Causality: models, reasoning, and inference. Cambridge, U.K. ; New York: Cambridge University Press, 2000.

[33] “Bayesian network,” Wikipedia. 11-Oct-2018.

[34] K. Hartnett, “To Build Truly Intelligent Machines, Teach Them Cause and Effect,” Quanta Magazine, 15-May-2018.

[35] J. Pearl, “The Seven Tools of Causal Inference with Reflections on Machine Learning,” Technical Report R-481, Jul. 2018.

[36] N. Bostrom, Superintelligence: paths, dangers, strategies. Oxford, United Kingdom ; New York, NY: Oxford University Press, 2016.

[37] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete Problems in AI Safety,” arXiv:1606.06565 [cs], Jun. 2016.

[38] A. Dafoe, “AI Governance: A Research Agenda,” Future of Humanity Institute, University of Oxford, Oxford, UK, Aug. 2018.

[39] C. Colclough, “Putting people and planet first: ethical AI enacted,” presented at the Conference on AI: Intelligent machines, smart policies, Paris, 2017.
