Theodore Scaltsas
Professor of Philosophy, Emeritus
![](https://edwebprofiles.ed.ac.uk/sites/default/files/styles/uoe_profile_picture/public/dory_1.jpg?itok=DRyctBTR)
- Philosophy
- School of Philosophy, Psychology and Language Sciences
Contact details
- Email: scaltsas@ed.ac.uk
- Web: AI Governance - TED Global Idea
Background
For brief biographical information, please scroll further below on this page.
AI Diet!
AI-Agents are harmful to your Autonomous Wellbeing!
Say ‘No’ to Agential-AI, because it decides for you!
Say ‘Yes’ to AI-Assistants, because you understand them, and they expect you to decide yourself!
Let your mind be the doctor: if nobody can comprehend or explain a solution, do not implement it!
Keep your distance from Superintelligence!
Democracy under AI-Governance
The best form of Democracy is to drop Voting!
Plato complained that Democratic Voting is not guided by the Good for the People, and democracies since then have been proving him right.
The best form of Democracy is reaching Consensus between opposing parties, which is the only democratic way to reach the Good for the People.
Will AI Governance do just this – No Voting, but reaching Consensus?
It seems AI is already preparing for this. (21 Oct. 2024)
The slippery-slope of Autonomy
If we entrust our democratic decision-making to experts, as we do with Representative Democracy, where does ‘entrusting’ stop, before we lose our autonomy to AI? Participation in the democratic process is not enough, e.g. when we participate by *obeying* AI.
Morality is not reducible to moral rules, or principles, or laws; so Aristotle, famously, thought.
So, dispositional-judgements are not reducible to Rule-judgements.
E.g., Aristotle thought that the ability of a judge to evaluate a moral situation is not reducible to rule-criteria.
AI-coders do not care about this. AI-coders believe that refined-rule-criteria, which only AI can discern, have proven their worth with, e.g., X-ray diagnoses.
However, a question then arises that no AI-Coders address: Are all dispositional-judgement cases reducible to refined-rule-judgement cases? If not, why does AI assume they are?
There is an automatic assumption in Deep Learning that dispositions are reducible to rules. It is not that algorithmic dispositions cannot be coded; it is that nobody cares to do it. So, society will use Deep Learning techniques for X-Ray diagnosis and again for legal-evaluations, without any qualms or hesitations about it.
So AI Morality is de facto reduced to refined-rule-judgements, courtesy of AI-Coders.
Is AI an 'expert', commanding the 'authority' of expertise in society?
Are people prepared to grant AI the authority of an 'expert' in ANY domain of human intellection?
Who will set the BOUNDARIES of AI-Expertise, beyond which AI should have no authority? AI companies will NOT; by contrast, they will expand and play up the boundaries of AI as an expert, for financial gain.
Will the GOVERNMENT step in to set limits to AI-Expertise, so that people can know when to TRUST AI and when not?
Should people TRUST AI for AI-GOVERNANCE? And yet, AI has already run for office!
World Government Warning:
AI Superintelligence may be hazardous to your mental health.
Intelligence Dominates!
We, humans, did it once, and have dominated over all animal species on Earth.
Now, AI will do it to us – become more intelligent than us and dominate us. It is already happening!
There is no such thing as an Assistant that has greater capacities than its Boss.
But Oxford Ethics in AI hopes that, nevertheless, this is possible – that smarter AI will always just assist us! :)
By contrast: Robots will dominate us, even as they are “assisting” us!
The greatest challenge:
Keep research into AI as a Weapon
separate from AI as our Assistant
even if this deprives AI Companies of profit. (16 Aug. 2024)
There will be no Virtue-Ethics in AI-Governance.
Plato introduced Virtue Ethics, and Aristotle developed the theory into a system that the West has wholeheartedly adopted in its societies: Western societies are grounded on trained experts, rather than on regulations and rules. We trust politicians and lawmakers to design laws, and judges and juries to implement them fairly. We trust mathematicians to check new mathematical proofs, and experts in any science and discipline to peer review new knowledge contributions. Similarly with doctors and teachers, as well as workmen and technicians. In all, we think experience develops expertise, which is embodied in dispositions (virtues of excellence), rather than in canons and rules.
Also, I used to ask if computers can generate dispositional behaviour, such as virtues.
Lucas Dixon argued that computers can generate dispositional behaviour, so Virtue Ethics would be realisable in computers.
But I now realise that I had asked the wrong question. It is not whether algorithms can be dispositional; it is, rather, whether AI Governance will run on dispositional algorithms.
Will AI-Governance be dispositional, or rule-based?
Prediction: AI-Governance will develop from AI-Management of cities and nations. AI-Management is rule-based, so AI-Governance will also be rule-based, to the detriment of the West!
Prediction: AI Companies will need high revenue to run AI-Superintelligence, which they will receive from cities and nations as clients, whose Public Services it will manage. For example, suppose Greece decides to ask Microsoft-AI to run its Public Services for greater efficiency and objectivity. (9 Aug. 2024) Starting from AI-Management, gradually, AI-Governance will evolve.
AI-Management will run on rule based algorithms. AI-Management will evolve into AI-Governance, and so, AI-Governance will run on rule based algorithms, not on dispositional algorithms that replicate Virtue Ethics. This will be an enormous shock to Western civilisation.
Nobody is preparing dispositional AI-Governance algorithms. So, there will be no Virtue Ethics in AI-Governance.
Do Human Beings have the Right to Wellbeing?
Not according to the Universal Declaration of Human Rights of the United Nations, which acknowledges the Right to Life, but not the Right to Wellbeing:
UN: "Article 3: Everyone has the right to Life, Liberty and Security of person."
Aristotle: “Wellbeing is the Final Good at which all human actions aim.” (Nic. Eth. 1097a28-29)
So, life makes no sense without Wellbeing. But can there be wellbeing without Autonomous Choice (Προαίρεσις)?
AI 'thinks' Wellbeing without Autonomy will just have to do!
OpenAI’s Disinformation
OpenAI is confusing the ALTER-INTELLIGENCE they are developing with SUPERINTELLIGENCE, but is promising the latter.
What’s the difference? Data.
For example, when DeepMind’s AlphaGo made “move 37” in the game with Lee Sedol, this move was unpredictable even by Go experts, and part of a game strategy they could not understand. However, this is only one type of reasoning strategy and Alter-Intelligence.
Making a reasoning plan about the best Go move is profoundly and fundamentally different from making a reasoning plan about, e.g., a conquest, including repatriating hostages; here, emotions and values weigh sub-targets, which does not happen with Go moves.
How, then, does one make a reasoning-plan based on emotions and values? With which AI Alter-intelligence? With none, because AI has no access to emotional and valuative data!
Emotions presuppose CONSCIOUSNESS to be aware of them and weigh them, which AI does not possess and has no access to.
Intelligent reasoning plans developed without access to emotional and valuative data are of a different type of reasoning than reasoning grounded on emotions and values.
Without emotions and values there is no Superintelligence and never will be; what AI companies are developing is various types of Alter-intelligence to solve different types of problems.
Types of Intelligence
Mary's New Room without human values (17 July 2024)
AI Alter-Intelligence
Human Intelligence: Grounded on logic and phenomenal experience (impressions, feelings, emotions).
AI Alter-Intelligence: Grounded on logic but not on phenomenal experiences. (17 July 2024)
Human Intelligence generates Human Values and Wisdom; AI Alter-Intelligence does not.
The problem AI Superintelligence will generate for humans is not that it will be smarter than us, but that it will not operate on Human Values.
NOBODY has the HUMAN RIGHT OF DEMOCRATIC AUTONOMY
The Ancient Athenians gave their lives for Democratic Autonomy, to introduce Democracy to the world, in 508 BCE.
And yet, NO COUNTRY in the world recognizes Democratic Autonomy as a Human Right.
We need to establish Autonomy as a Human Right before we lose our Democratic Autonomy to AI deciding for us.
The SUPERINTELLIGENCE-ROOM!
We will not understand Superintelligence, nor will we be able to translate it.
There can be no Superintelligence-Room with a translation manual from AI to Human language.
Because Superintelligence will be grounded on techno-data and techno-concepts that are not human data or concepts.
Superintelligence will be technocentric, not anthropocentric.
The SUPERINTELLIGENCE CONCEPTUAL-SCHEME:
Will AI Superintelligence operate on a different Conceptual-Scheme than the Human Conceptual-Scheme?
Even if we do not understand Superintelligence, can we at least use its predictions?
We do not understand Quantum Mechanics either, but its predictions work for us!
However, we will not understand even the predictions of AI SuperIntelligence.
Because Superintelligence will predict in techno-concepts that are not human concepts.
The Human Right of Democratic Autonomy
Autonomy came to humanity 2,500 years ago, when Athenian Democracy was instituted in 508 BCE. Democratic Autonomy has been taken for granted as part of Human Wellbeing since then, but now Human Autonomy is being handed over to AI, to decide what is best for humanity, qua smarter than humanity.
Oxford University - AI Ethics says: STOP DEVELOPING the intelligence of AI!
Microsoft's Mustafa Suleyman says on TED: let us sacrifice human Autonomy for AI!
I say: Let us declare Democratic Autonomy a Human Right! (09 June 2024)
Human Wellbeing without Democratic Autonomy in AI-Governance?
Reconceiving Wellbeing in AI-Governance without Human Democratic Autonomy.
The Human MIND is the Ultimate DISINFORMATION.
Limited understanding is Disexplanation. AI will surpass human intelligence, developing understanding that is beyond human capacity. Then, human understanding will be Disinformation, distorting AI understanding. (1 June 2024)
Disexplanation
Disinformation alters INFORMATION about FACTS, to deceive us.
Disexplanation uses AI to alter our UNDERSTANDING of how things are, and hence, our ability to EXPLAIN how things are.
Knowledge Revolution
1. The Industrial Revolution replaced our TOOLS for doing, and making things, which generated a societal upheaval internationally.
2. The Knowledge Revolution, by AI, will replace OURSELVES.
2,500 years of the Athenian Democracy = Democratic Autonomy (508 BCE – 2030)
Pericles to Mustafa Suleyman (the Microsoft AI CEO, who announced the End of Autonomy).
The History of Humanity:
- Tens of millions of years in the Jungle we practiced CLIENTELISM to "Rulers".
- Then, the Athenians introduced Democratic AUTONOMY, when they pioneered DEMOCRACY (508 BCE), which lasted until AI GOVERNANCE - 2030.
- The EU AI-Act protects us only from Social-Credit-type Clientelism - namely, if algorithms evaluated us.
- However, AI does not do Clientelism! AI does not like us/dislike us, or evaluate us.
- AI Governance is pure Rule-Following. No Clientelism; no Democracy; no Autonomy! This is brand new! Do we want it?
AI REFERENDUM
Help people understand AI, and decide democratically by REFERENDUM if they want REGULATED-AI’s WELLBEING WITHOUT DEMOCRATIC AUTONOMY.
Nick Bostrom’s 'Deep Utopia' repeats Plato’s 'Noble Lie'.
- Plato: If people believe that some are made of ‘gold’, some of ‘silver’, and some of ‘bronze’, they will accept the role they are given in society by the Philosopher King.
- Bostrom: If people develop AI safely, if they govern it well, and if they make good use of its powers, then they will enjoy the benefits AI bestows on them.
But what if they do not?
The AI-Version of Athenian Democracy - "Delphi Economic Forum" Talk
- Athenian Democracy introduced AUTONOMY into the history of Humanity, 2,500 years ago.
- Pericles and Aristotle argued for the Human Right to DEMOCRATIC AUTONOMY and Wellbeing.
- The FREE-Market protected AUTONOMY in Western Democracies.
- However, this same FREE-Market will unstoppably develop AI to be the smartest possible, because it is profitable.
- When AI is smarter than humans, humans will surrender DEMOCRATIC AUTONOMY to AI to make decisions for them, because it is smarter.
- Therefore, the FREE-Market is undermining DEMOCRATIC AUTONOMY.
- Conclusion: AI will NOT extinguish humanity; AI will NOT take over Humanity; but AI will define a NEW TYPE of Human Wellbeing: For the first time: Human Wellbeing without Autonomy.
Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, has just published his new book, 'Deep Utopia'. He imagines that 'we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near magical technological powers that this technology can unlock'. In exactly this scenario, if things go precisely as wished for and planned, where AI is successfully REGULATED, then a nightmare of plenty minus Democratic Autonomy awaits us.
THE THREAT OF REGULATED AI:
The AI-Version of Pericles’ Athenian Democracy is Democracy without DEMOCRATIC AUTONOMY.
Privacy versus Private-Space of action
Privacy is defined by the data that enable one to manage records about them, e.g. financial, health, educational records, etc. The AI Act protects our Privacy from high-risk AI-applications. Private-Space is the domain of action between one’s conception of the GOOD and the LAW in society. It is ‘where’ we live our lives. The next commercial step of AI will be to micro-manage our Private-Space, while it is managing society more 'safely and efficiently' than humans do.
Micro-managing us will not threaten our Privacy, but it will deprive us of DEMOCRATIC AUTONOMY. The AI Act will not interfere with AI micro-managing us, because AI will be respecting our privacy while micro-managing us. So the AI Act does not protect our DEMOCRATIC AUTONOMY in decision-making.
Micro-management requires innumerable data to operate and super-fast computers and AI chips to run it. So, depriving humans of DEMOCRATIC AUTONOMY requires a huge investment in AI.
No AI-DATA can capture what is 'GOOD' in Society
No AI-Data can be collected for what humans judge as GOOD, because what is GOOD is not captured by statistical profiles of our actions. What we judge to be GOOD is captured by what people DO NOT DO, too, rather than only by the Data of what they do. Aristotle called this conception of the GOOD sunesis (σύνεσις) and distinguished it from phronesis, as the pre-theoretical conception of what is GOOD, which guides the development of our moral character in society.
There are some things about humans that AI cannot discover and learn, because NO DATA capture them. People need to understand WHEN TO TRUST AI and WHEN NOT TO TRUST IT, before surrendering OUR AUTONOMY and OUR DECISION-MAKING to AI.
Post-AI Democracy - The Future of Democratic Autonomy
The first thing we need to understand is that AI Algorithms embody MORAL VALUES. This does not mean that Algorithms are moral agents, but only that when we accept the operation of Algorithms, their operations embody values, as all social operations do (e.g., evaluations, decision-making, etc.), and we bring those algorithmic moral values into our lives. Such biases can be corrected through additional training of the Algorithms. Other ways in which Algorithmic values interfere with our lives are, for example, infringements of our privacy, or of other such values. Again, such infringements can be avoided or corrected with further training of the Algorithms. On this basis – the possibility of correcting and retraining Algorithms – the AI community has asked for REGULATING AI, so that partially trained algorithms do not enter the market. My main aim is to argue that even REGULATED AI is HARMFUL to Humanity. It is harmful to Humanity because we will TRUST REGULATED AI, precisely because it is REGULATED, and we will therefore gradually surrender our AUTONOMY to AI’s decision-making about everything, in the era of AI-Governance which we are rapidly approaching. REGULATED AI is the pathway to BENEVOLENT AUTOCRACY, at best!
Watch the Video here - Institute of Philosophy and Technology, Dr Giannis Stamatellos
Dory Scaltsas studied ‘Philosophy and Mathematics’ at Duke University and continued in Philosophy at Brandeis University and Oxford University, where he received his Doctorate in Philosophy. After teaching as a Lecturer in Philosophy at Oxford University for a few years, Dory was appointed at Edinburgh University, Philosophy, from where he retired as Chair of Ancient Greek Philosophy in 2018. Since then, Dory has focused on designing and creating Museums of Hellenic Culture and of Hellenic Wisdom, which has brought him to AI-Wisdom.
Dory Scaltsas is Professor Emeritus of Philosophy, working on Creative Thinking and on AI Values.
Dory is designing and creating a Pilot of an Exhibit of Hellenic Wisdom, for the European Commission.
He is also designing and creating with CERTH and EXUS a Wisdom AI Bot to display Wisdom, museologically.
Dory is directing the design and creation of the Museum of Hellenic Ideas, installed by Aristotle's Lyceum in Athens - the archaeological site of Aristotle's Peripatetic School.
Two newspaper articles: The Future of Wisdom when AI is smarter than us (in English); and AI Governance of humanity (In English).
Dory developed the theory of BrainMining of emotive lateral solutions: Harvard Business Review ; and The Leader's Guide to Problem Solving
He received his doctorate in philosophy at Oxford University (D.Phil.), where he wrote his thesis on Aristotle’s metaphysics, supervised by Prof. John Ackrill and Prof. Sir Peter Strawson. He studied philosophy and mathematics at Duke University (B.S.), and at Brandeis University (M.A.).
Dory continues his Affiliation with his alma mater, Oxford University, Wolfson College.
Dory’s first appointment was at Oxford University, New College, as Lecturer in Philosophy, 1980-84. He then joined our department and has since held Research Fellowships at:
- Harvard University, Research Fellow, Centre for Hellenic Studies, 1987-1988.
- Princeton University, Research Fellow, Seeger Research Center, 1989.
Current Research: Democracy and AI-Wisdom
Moral Dilemma: AI Governance: Would you want 'AI Superintelligence' to run your life, for your own good?
Creative Thinking:
BrainMining [use emotions to increase the space of solutions]
Emotive Lateral Thinking and Valuative Intelligence [increase our space of solutions]
Creative thinking is what we are not taught, either at school or at university. Yet, it is ranked a top trait by employers. It is not about being artistic, or entrepreneurial. It is about solving problems in novel ways, and tackling insoluble predicaments: problems in our personal lives, our social relations, and in business challenges. Let’s Get Lateral aims to reverse this trend at Edinburgh University, and make individual and group creative thinking skills and methodology accessible to all. You will learn the way we can use our mental powers, our emotions, and even our innate cognitive biases, to spark off lateral solutions.
Projects:
- C2Learn: BrainMining was the basis for the award of C2Learn, a European Commission research project for teaching creative thinking in schools (€3.3M; 2012-2015).
- Archelogos Argument-Base: The Arguments in Plato and Aristotle. Pioneering Digital Humanities Project, 1990-present. Dory founded and directs Project Archelogos, a research project for the creation of an argument-database, using a new methodology for the analysis into arguments of Plato’s and Aristotle’s philosophical texts. Project Archelogos enjoys wide international collaboration and received the Henry Ford Foundation Award for the Preservation of European Culture in 1997.
- Argument Visualisation Projects: A further series of his projects centre on Argument Visualisation -- the use of computers to graphically represent the structure and conceptual relations between theses and/or arguments:
- GnosioGenesis, 2001-2002.
- The Philosophy of Socrates, 2001-2007.
- TechnoSophia, 2000-2003.
- Elenchus: Arguments For/Against Democracy 1999.
- Digital Democracy, 1998.
- LogAnalysis, 1996-1998.
- Emotions First: The Role of Emotions in Reasoning, with EU Marie Curie Fellow, Dr Laura Candiotto. Investigating Greek philosophers' theories of action where the battle between our desires grounded the pattern constitutive of our rationality.
Creative Valuative Intelligence
Valuative Intelligence complements Emotional Intelligence, targeting values rather than emotional states.
Creative Valuative Intelligence generates solutions that cannot be generated by the traditional deliberative practical syllogism. Creative thinking and lateral problem solving are not restricted to industrial products only; they apply equally in the domain of emotions and values, as, e.g., in politics and social relations. We need to learn to apply creative thinking in the emotive and valuative domains, in order to generate new conceptions of well-being for ourselves.
Dory is using Creative Thinking and Valuative Intelligence to explore human social possibilities for the era of AI Governance. AI Governance will challenge our values, our emotions and our well-being. However, this is also an unprecedented opportunity to design innovative ways of flourishing, afforded by the dawning of the era of digital well-being.
Visiting and research positions
- Harvard University (1987-8)
- Princeton University (1989)
- University of Sydney (1991)
- Dartmouth College (1993)
- Scuola Normale Superiore, Pisa (2000)
- University of Cyprus (2005)
Publications
- Valuative Intelligence -- The Creative Design of our Wellbeing, MEDIENIMPULSE, special issue on "Creativity and Co-Creativity", Austrian Ministry of Education.
- BrainMining: 'A Cognitive Trick for Solving Problems Creatively - Mental biases can actually help', Harvard Business Review, 4 May 2016. Chinese Translation: Discussion 1. Discussion 2.
- Extended and Embodied Values and Ideas
- 'Substantial Holism'
- ‘Is a whole identical to its parts?’
- Latest research publications and PhilPapers
Books
- The Philosophy of Epictetus (co-ed., Oxford: Oxford University Press, 2007).
- The Philosophy of Zeno of Citium (co-ed., Larnaca Press, ISBN 9-963-60323-8, 2002).
- Argument Analysis of Aristotle's On Generation and Corruption (1998), published by Project Archelogos.
- Substances and Universals in Aristotle's Metaphysics (Ithaca: Cornell University Press, 1994; paperback ed. 2010)
- Unity, Identity, and Explanation in Aristotle's Metaphysics (co-ed., Oxford: Oxford University Press, 1994).
- The Golden Age of Virtue: Aristotle's Ethics (Athens: Alexandria Press, 1993; reprinted 2010)
- Aristotelian Realism, Deukalion Special Issue (ed., Athens: Daedalos Press, 1993).
Archelogos publications
Argument Analyses of Plato’s and Aristotle’s works at: https://archelogos.co/
- Christopher Rowe - Plato's Republic V, 2016.
- George Rudebusch and Christopher Turner - Plato's Laches, 2016.
- Hugh Benson - Charmides, 1998.
- Robin Waterfield - Gorgias, 2001.
- David Robinson & F.-G. Herrmann - Lysis, 1999.
- George Rudebusch - Plato's Philebus, 2016.
- Timothy Chappell - Theaetetus, 2002.
- Robert Heinaman - Metaphysics Z, 2002.
- S Marc Cohen & Gareth Matthews - Metaphysics K, 2008.
- Paula Gottlieb - Nicomachean Ethics I-II, 2001.
- Norman Dahl - Nicomachean Ethics III, 2008.
- Norman O. Dahl (2016) Nicomachean Ethics IV, 2016.
- Carlo Natali - Nicomachean Ethics X, 2008.
- Theodore Scaltsas - On Generation and Corruption, 1998.
- Allan Bäck - Aristotle’s Prior Analytics I, completed, forthcoming.
- George Kennedy - Rhetoric III, 1999.
The Archelogos projects have been supported by George David, the Leventis Foundation, the Carnegie Trust, the Leverhulme Trust, the Kostopoulos Foundation, the Directorate of Education of the European Community, and by Livanos and the Hellenic Ship-owners Association in London.
Responsibilities & affiliations
- Course Organiser for the Structure of Being
- Course Organiser for Ancient Theories of Existence
Undergraduate teaching
Greats: Aristotle lectures
Ancient Theories of Existence
The Structure of Being
Contact Hours: Wed's 1:00-2:00, DSB 6.03.
Current PhD students supervised
Research summary
BrainMining; Creative Lateral Thinking and Emotional Intelligence; Ancient Philosophy; Contemporary Metaphysics.
Current research interests
Dory’s current research is on the theory of BrainMining - emotive thinking - creative lateral solutions; on the relation of emotions to creative lateral thinking; and on emotions in decision making. He is also developing a theory of Duoist Creative Thinking on the basis of Yijing metaphysical principles of Chinese thought. He leads and participates in research projects for the development of methods for teaching creative lateral thinking in schools. He has further research interests in ancient metaphysics, contemporary metaphysics, and ancient epistemology.
Project activity
- Digital Exhibition of Zeno of Citium and Stoicism, for Cyprus' EU Presidency. Within the framework of Cyprus' Presidency of the European Union 2012, the Secretariat for the Presidency and the Cypriot Ministry of Education and Culture funded the creation of an Exhibition of the Ideas of Zeno and Stoicism.
- Creative Emotional Reasoning C2Learn Funded by EU 7th Framework Programme ICT (Information and Communication Technologies, €3.3M) to explore lateral thinking, emotions and creativity.
- Emotions First - The role of emotions in reasoning, Marie Curie Fellow supervision, Dr Laura Candiotto, 2015-2017.
- Project Archelogos, Mr George David - 3E + Leventis Foundation. (See under Publications below).
- Latest research grants/projects