The Twentieth Century

In the twentieth century professional philosophy divided into two streams, sometimes called ‘Analytic' and ‘Continental', and there were periods during which the two schools lost contact with each other. Towards the end of the century, however, there were more philosophers who could speak the languages of both traditions. The beginning of the analytic school is sometimes located in the rejection of neo-Hegelian idealism by G. E. Moore (1873-1958). One way to characterize the two schools is that the Continental school continued to read and be influenced by Hegel, while the Analytic school (with some exceptions) did not. Another way to make the distinction is geographical: the analytic school is located primarily in Britain, Scandinavia and North America, and the continental school in the rest of Europe, in Latin America and in certain schools in North America. Some figures from the Continental school are described first, after which we turn to the analytic school (which is this writer's own).

Martin Heidegger (1889-1976) was initially trained as a theologian, and wrote his dissertation on what he took to be a work of Duns Scotus. He took an appointment under Edmund Husserl (1859-1938) at Freiburg, and was appointed to succeed him in his chair. Husserl's program of ‘phenomenology' was to recover a sense of certainty about the world by studying in exhaustive detail the cognitive structure of appearance. Heidegger departed from Husserl in approaching Being through a focus on ‘Human Being' (in German, Dasein) as concerned above all for its fate in an alien world, or as ‘anxiety' (Angst) towards death (see Being and Time I.6). In this sense he is the first existentialist, though he did not use the term. Heidegger emphasized that we are ‘thrown' into a world that is not ‘home', and that we have a radical choice about which possibilities for ourselves we will make actual.
Heidegger drew here from Kierkegaard, and he is similar to Kierkegaard also in describing the danger of falling back into mere conventionality, what Heidegger calls ‘the They' (das Man). On the other hand he is unlike Kierkegaard in thinking of traditional Christianity as just one more convention that authentic existence requires us to get beyond. In Heidegger, as in Nietzsche, it is hard to find a positive or constructive ethics. Heidegger's position is somewhat compromised, moreover, by his initial embrace of the Nazi party. In his later work he moved increasingly towards a kind of quasi-religious mysticism. His Romantic hatred of the modern world and his distrust of system-building led him to espouse either silence or poetry as the best way to be open to the ‘something' (sometimes he says ‘the earth') which reveals itself only as ‘self-secluding', or hiding itself away from our various conceptualizations. He held the hope that through poetry, and in particular the poetry of Hölderlin, we might still be able to sense something of the god who appears ‘as the one who remains unknown,' who is quite different from the object of theology or piety, but who can bring us back to the Being we have long lost sight of (Poetry, Language, Thought, 222).

Jean-Paul Sartre (1905-80) did use the label ‘existentialist', and said that ‘Existentialism is nothing else than an attempt to draw all the consequences of a coherent atheist position' (Existentialism and Human Emotions, 51). He denied (like Scotus) that the moral law could be deduced from human nature, but this was because (unlike Scotus) he thought that we give ourselves our own essences by the choices we make. His slogan was, ‘Existence precedes essence' (Ibid., 13). ‘Essence' here is the defining property of a thing, and Sartre gave the example of a paper cutter, which is given its definition by the artisan who makes it.
Sartre said that when people believed God made human beings, they could believe humans had a God-given essence; but now that we do not believe this, we have realized that humans give themselves their own essences (‘First of all, man exists, turns up, appears on the scene, and, only afterwards, defines himself.' Ibid., 15). On this view there are no outside commands to appeal to for legitimation, and we are condemned to our own freedom. Sartre thought of human beings as trying to be God (on a Hegelian account of what God is), even though there is no God. This is an inevitably fruitless undertaking, which he called ‘anguish'. Moreover, we inevitably desire to choose not just for ourselves, but for the world. We want, like God, to create humankind in our own image: ‘If I want to marry, to have children, even if this marriage depends solely on my own circumstances or passion or wish, I am involving all humanity in monogamy and not merely myself. Therefore, I am responsible for myself and for everyone else. I am creating a certain image of man of my own choosing. In choosing myself, I choose man' (Ibid., 18). To recognize that this project does not make sense is required by honesty, and to hide this from ourselves is ‘bad faith'. One form of bad faith is to pretend that there is a God who is giving us our tasks. Another is to pretend that there is a ‘human nature' that is doing the same thing. To live authentically is to realize both that we create these tasks for ourselves, and that they are futile.

The twentieth century also saw, within Roman Catholicism, forms of Christian Existentialism and new adaptations of the system of Thomas Aquinas. Gabriel Marcel (1889-1973), like Heidegger, was concerned with the nature of Being as it appears to human being, but he tried to show that there are experiences of love, joy, hope and faith which, as understood from within, give us reason to believe in an inexhaustible Presence, which is God.
Jacques Maritain (1882-1973) developed a form of Thomism that retained the natural law, but regarded ethical judgment as not purely cognitive but guided by pre-conceptual affective inclinations. He took a more Hegelian view of history than traditional Thomism, allowing for development in the human knowledge of natural law, and he defended democracy as the appropriate way for human persons to attain freedom and dignity. The notion of the value of the person and the capacities given to persons by their creator was at the center of the ‘personalism' of Pope John Paul II's The Acting Person (1979), influenced by Max Scheler (1874-1928).

Natural law theory has been taken up and modified more recently by two philosophers who write in a style closer to the analytic tradition, John Finnis (1940-) and, in a different and incompatible way, Alasdair MacIntyre (1929-). Finnis holds that our knowledge of the fundamental moral truths is self-evident, and so is not deduced from human nature. His Natural Law and Natural Rights (1980) was a landmark in integrating the modern vocabulary and grammar of rights into the tradition of Natural Law. MacIntyre, who has been on a long journey back from Marxism to Thomism, holds that we can know what kind of life we ought to live on the basis of knowing our natural end, which he now identifies in theological terms. He is still influenced by a Hegelian historicism, and holds that the only way to settle rival knowledge claims is to see how successfully each can account for the shape taken by its rivals.

Michel Foucault (1926-84) followed Nietzsche in aspiring to uncover the ‘genealogy' of various contemporary forms of thought and practice (he was concerned, for example, with sexuality and mental illness), and to show how relations of power and domination have produced ‘discourses of truth' (“Truth and Power,” Power, 131). In his later work he described four different aspects of the ‘practice of the self' in which we have been engaged.
First, we select the desires, acts, and thoughts that we attend to morally. Second, we recognize ourselves as morally bound by some particular ground, e.g., divine commands, or rationality, or human nature. Third, we transform ourselves into ethical subjects by some set of techniques, e.g., meditation or mortification or consciousness-raising. Finally, we propose a ‘telos' or goal, the way of life or mode of being that the subject is aiming at, e.g., self-mastery, tranquility or purification. Foucault criticized Christian conventions that tend to take morality as a juristic and often universal code of laws, and to ignore the creative practice of self-making. Even if Christian and post-Christian moralists turn their attention to self-expression, he thought they tend to focus on the confession of truth about oneself, a mode of expression which is historically linked to the church and the modern psycho-sciences. Foucault preferred to stress our freedom to form ourselves as ethical subjects, and to develop ‘a new form of right' and a ‘non-disciplinary form of power' (“Disciplinary Power and Subjection,” Power, 242). He did not, however, tell us much more about what these new forms would be like.

Jürgen Habermas (1929-) proposed a ‘communicative ethics' that develops the Kantian element in Marxism (The Theory of Communicative Action, Vols. I and II). By analyzing the structure of communication (using speech-act theory developed in analytic philosophy) he lays out a procedure that will rationally justify norms, though he does not claim to know what norms a society will adopt by using this procedure. The two ideas behind this procedure are that norms are valid if they receive the consent of all the affected parties in unconstrained practical communication, and if the consequences of the general observance of the norms (in terms of how each person's interests are affected) are acceptable to all.
Habermas thinks he fulfills in this way Hegel's aim of reconciling the individual and society, because the communication process extends individuals beyond their private perspectives in the process of reaching agreement. Religious convictions need to be left behind, on this scheme, because they are not universalizable in the way the procedure requires.

We are sometimes said to live now in a ‘post-modern' age. This term is problematic in various ways. As used within architectural theory in the 1960s and 1970s it had a relatively clear sense: there was a recognizable style that either borrowed bits and pieces from styles of the past, or mocked the very idea (in modernist architecture) of essential functionality. In philosophy, the term is less clearly definable. It combines a distaste for ‘meta-narratives' with a rejection of any form of foundationalism. The effect on philosophical thinking about the relation between morality and religion is two-fold. On the one hand, the modernist rejection of religion on the basis of a foundationalist empiricism is itself rejected. This makes the current climate more hospitable to religious language than it was for most of the twentieth century. But on the other hand, the distaste for over-arching theory means that religious meta-narratives are suspect to the same degree as any other, and the hospitality is more likely to be towards bits and pieces of traditional theology than to any theological system as a whole.

We conclude this section with some movements that are not philosophical in a professional sense, but are important in understanding the relation between morality and religion. Liberation theology, of which a leading spokesman from Latin America is Gustavo Gutiérrez (1928-), has attempted to reconcile the Christian gospel with a commitment (influenced by Marxist categories) to revolution to relieve the condition of the oppressed.
The civil rights movement (drawing heavily on Exodus), feminist ethics, animal liberation, environmental ethics, and the gay rights and children's rights movements have shown special sensitivity to the moral status of some particular oppressed class. The leadership of some of these movements has been religiously committed, while the leadership of others has not.

At the same time, the notion of human rights, or justified claims by every human being, has grown in global reach, partly through the various instrumentalities of the United Nations. There has, however, been less consensus on the question of how to justify human rights. There are theological justifications, deriving from the image of God in every human being, or the command to love the neighbor, or the covenant between God and humanity. Whether there is a non-theological justification is not yet clear.

Finally, there has also been a burst of activity in professional ethics, such as medical ethics, engineering ethics, and business ethics. This has not been associated with any one school of philosophy rather than another. The connection of religion with these developments has been variable. In some cases (e.g., medical ethics) the initial impetus for the new sub-discipline was strongly influenced by theology, and in other cases not.

We return, finally, to analytic philosophy, whose origins were associated above with G. E. Moore. His Principia Ethica (1903) can be regarded as the first major ethical document of the school. He was strongly influenced by Sidgwick at Cambridge, but rejected Sidgwick's views about intuitionism. He thought that intrinsic goodness was a real property of things, even though (like the number two) it did not exist in time and was not the object of sense experience. He explicitly aligned himself here with Plato and against the class of empiricist philosophers, ‘to which most Englishmen have belonged' (Principia Ethica, 162).
His predecessors, Moore thought, had almost all committed the error, which he called ‘the naturalistic fallacy', of trying to define this value property by identifying it with a non-evaluative property. For example, they proposed that goodness is pleasure, or what produces pleasure. But whatever non-evaluative property we try to say goodness is identical to, we will find that it remains an open question whether that property is in fact good. For example, it makes sense to ask whether pleasure or the production of pleasure is good. This is true also if we propose a supernatural property to identify with goodness, for example the property of being commanded by God. It still makes sense to ask whether what God commands is good. This question cannot be the same as the question ‘Is what God commands what God commands?', which is not an open question. Moore thought that if these questions are different, then the two properties, goodness and being commanded by God, cannot be the same, and to say (by way of a definition) that they are the same is to commit the fallacy. Intrinsic goodness, Moore said, is a simple non-natural property (i.e., neither natural nor supernatural) and indefinable. He thought we had a special form of cognition that he called ‘intuition', which gives us access to such properties. By this he meant that the access was not based on inference or argument, but was self-evident (though we could still get it wrong, just as we can with sense-perception). He thought the way to determine which things had positive value intrinsically was to consider which things were such that, if they existed by themselves in isolation, we would yet judge their existence to be good.

At Cambridge Moore was a colleague of Bertrand Russell (1872-1970) and Ludwig Wittgenstein (1889-1951). Russell was not primarily a moral philosopher, and he expressed radically different views about ethics at different times.
In 1910 he agreed with Moore that goodness (like roundness) is a quality that belongs to objects independently of our opinions, and that when two people differ about whether a thing is good, only one of them can be right. By 1922 he was holding an error theory (like that of John Mackie, 1917-81): although we mean by ‘good' an objective property in this way, there is in fact no such thing, and hence all our value judgments are strictly speaking false (“The Elements of Ethics,” Philosophical Essays). Then by 1935 he had also dropped the claim about meaning, holding that value judgments are expressions of desire or wish, and not assertions at all.

Wittgenstein's views on ethics are enigmatic and subject to wildly different interpretations. In the Tractatus (which is about logic) he says at the end, ‘It is clear that ethics cannot be put into words. Ethics is transcendental. (Ethics and aesthetics are one and the same.)' (Tractatus, 6.421). Perhaps he means that the world we occupy is good or bad (and happy or unhappy) as a whole, and not piece-by-piece. Wittgenstein (like Nietzsche) was strongly influenced by Schopenhauer's notion of will, and by his disdain for ethical theories that purport to be able to tell one what to do and what not to do. The Tractatus was taken up by the Logical Positivists, though Wittgenstein himself was never a Logical Positivist. The Logical Positivists held a ‘verificationist' theory of meaning: assertions are meaningful only if they can in principle be verified by sense experience or if they are tautologies (for example, ‘All bachelors are unmarried men'). This seems to leave ethical statements (and statements about God) meaningless, and indeed that was the deliberately provocative position taken by A. J. Ayer (1910-89).
Ayer accepted Moore's arguments about the naturalistic fallacy, and since Moore's talk of ‘non-natural properties' seemed to Ayer just nonsense, he was led to emphasize and analyze further the non-cognitive ingredient in evaluation which Moore had identified. Suppose one says to a cannibal, ‘You acted wrongly in eating your prisoner.' Ayer thought one is not stating anything more than if one had simply said, ‘You ate your prisoner'. Rather, one is evincing moral disapproval of the act. It is as if one had said, ‘You ate your prisoner' in a peculiar tone of horror, or written it with the addition of some special exclamation marks (Language, Truth and Logic, 107-8).

The emotivist theory of ethics had its most articulate treatment in Ethics and Language by Charles Stevenson (1908-79). Stevenson was a positivist, but also the heir of John Dewey (1859-1952) and the American pragmatist tradition. Dewey had rejected the idea of fixed ends for human beings, and stressed that moral deliberation occurs in the context of competition within a person between different ends, none of which can be assumed permanent. He criticized theories that tried to derive moral principles from self-certifying reason, or intuition, or cosmic forms, or divine commands, both because he thought there are no self-certifying faculties or self-evident norms, and because the alleged derivation disguises the actual function of the principles as devices for social action. Stevenson applied this emphasis to the competition between people with different ends, and stressed the role of moral language as a social instrument for persuasion (Ethics and Language, Ch. 5). On his account, normative judgments express attitudes and invite others to share these attitudes, but they are not strictly speaking true or false.

Wittgenstein did not publish any book after the Tractatus, but he wrote and taught; after his death Philosophical Investigations was published in 1953.
The later thought of Wittgenstein bears a relation to Logical Positivism similar to the relation Heidegger bears to Husserl. In both cases the quest for a kind of scientific certainty was replaced by the recognition that science is itself just one language, and not in many cases prior by right. The later Wittgenstein employed the notion of ‘forms of life' in which different ‘language games' are at home (Philosophical Investigations, §7), and probably included religion as a form of life (though scholars disagree about this).

In Oxford there was a parallel though distinct development centering on the work of John Austin (1911-60). Austin did not suppose that ordinary language was infallible, but he did think that it preserved a great deal of wisdom that had passed the test of centuries of experience, and that traditional philosophical discussion had ignored this primary material. In How to Do Things with Words (published posthumously) Austin labeled ‘the descriptive fallacy' the mistake of thinking that all language is used to perform the act of describing or reporting, and he attributed the discovery of this fallacy to Kant (How to Do Things with Words, 3).

R. M. Hare (1919-2002) took up the diagnosis of this fallacy, and proposed a ‘universal prescriptivism' which attributed three characteristics to the language of morality. First, it is prescriptive, which is to say that moral judgments express the will in a way analogous to commands. This preserves the emotivist insight that moral judgment is different from assertion, but does not deny the role of rationality in such judgment. Second, moral judgment is universalizable. This is similar to the formula of Kant's categorical imperative that requires that we be able to will the maxims of our actions as universal laws. Third, moral judgment is overriding: moral prescriptions legitimately take precedence over any other normative prescriptions.
In Moral Thinking (1981) Hare claimed to demonstrate that utilitarianism followed from these three features of morality, though he excluded ideals (in the sense of preferences for how the world should be independently of the agent's experience) from the scope of this argument. God enters in two ways into this picture. First, Hare proposed a figure he calls ‘the archangel', who is the model for fully critical (as opposed to intuitive) moral thinking, having full access to all the relevant information and complete impartiality between the affected parties. Hare acknowledged that since archangels (e.g., Lucifer) are not reliably impartial in this way, it is really God who is the model. Second, we have to be able to believe (as Kant argued) that the universe sustains morality, in the sense that it is worthwhile trying to be morally good. Hare thought that this requires something like a belief in Providence (“The Simple Believer,” Essays on Religion and Education, 22-3).

The most important opponent of utilitarianism in the twentieth century was John Rawls (1921-2002). In his Theory of Justice (1971) he gave, like Hare, a basically Kantian account of ethics. But he insisted that utilitarianism does not capture the Kantian insight that each person is an end in himself or herself, because it ‘does not take seriously the distinction between persons' (Theory of Justice, 22). He constructed the thought experiment of the ‘Original Position', in which individuals imagine themselves not knowing what role in society they are going to play or what endowments of talent or material wealth they possess, and agree together on what principles of justice they will accept. Rawls thought it important that substantive conceptions of the good life were left behind in moving to the Original Position, because he was attempting to provide an account of justice that people with competing visions of the good could agree to in a pluralist society.
Like Habermas he included religions under this prohibition. In Political Liberalism (1993) he conceded that the procedure of the Original Position is itself ideologically constrained, and he moved to the idea of an overlapping consensus: Kantians can accept the idea of justice as fairness (which the procedure describes) because it realizes autonomy, utilitarians because it promotes overall utility, Christians because it is part of divine law, etc. But even here Rawls wanted to insist that adherents of the competing visions of the good leave their particular conceptions behind in public discourse and justify the policies they endorse on grounds that are publicly accessible. He described this as the citizen's duty of civility (Political Liberalism, iv).

In closing the section of this article on the continental school, we talked briefly about postmodernism. Within analytic philosophy the term is less prevalent. But both schools live in the same increasingly global cultural context. In this context we can reflect on the two main disqualifiers of the project of relating religion intimately to morality that seemed to emerge in the nineteenth and twentieth centuries. The first disqualifier is the prestige of natural science, and the attempt to make it foundational for all human knowledge. The various empiricist, verificationist, and reductionist forms of foundationalism have not yet succeeded, and even within modern philosophy there has been continuous resistance to them. This is not to say that they will not succeed in the future (for example, we may discover a foundation for ethics in the theory of evolution), but the confidence in their future success has waned. Moreover, the secularization hypothesis that religion would wither away with increasing education seems to have been false. Certainly parts of Western Europe are less attached to traditional institutional forms of religion.
But taking the world as a whole, religion seems to be increasing in influence rather than declining as the world's educational standards improve.

The second main disqualifier is the liberal idea (present in the narrative of this article from the time of the religious wars in Europe) that we need a moral discourse based on reason and not religion in order to avoid the hatred and bloodshed that religion seems to bring with it. Here the response to Rawls has been telling. It seems false that we can respect persons and at the same time tell them to leave their fundamental commitments behind in public discourse, and it seems false also that some purely rational component can be separated off from these competing substantive conceptions of the good (cf. Wolterstorff, “An Engagement with Rorty”). It is true that religious commitment can produce the deliberate targeting of civilians in a skyscraper. But the history of the twentieth century, the bloodiest century of our history, shows that non-religious totalitarian regimes have at least as much blood on their hands. Perhaps the truth is, as Kant saw, that people under the Evil Maxim will use any available ideology for their purposes. Progress towards civility is more likely if Muslims, Christians, Jews (and Buddhists and Hindus) are encouraged to enter ‘the public square' with their commitments explicit, and to see how much common ethical ground there in fact is. This writer has taken part in some of this discussion, and found the common ground surprisingly extensive, though sometimes common language disguises significant differences. Progress seems more likely in this way than by trying to construct a neutral philosophical ground that very few people actually accept.

We end with a recent development in analytic ethical theory, a revival of divine command theory parallel to the revival of natural law theory already described. A pioneer in this revival was Philip Quinn's Divine Commands and Moral Requirements (1978).
He defended the theory against the usual objections (one, deriving from Plato's Euthyphro, that it makes morality arbitrary, and the second, deriving from a misunderstanding of Kant, that it is inconsistent with human autonomy), and proposed that we understand the relation between God and moral rightness causally, rather than analyzing the terms of moral obligation as meaning ‘commanded by God'. Though we could stipulate such a definition, it would make it obscure how theists and non-theists could have genuine moral discussion, as they certainly seem to do. Richard Mouw's The God Who Commands (1990) locates divine command theory within the Calvinist tradition, and also stresses the background of divine command against the narrative of God's work in saving human beings. Robert M. Adams, in a series of articles and then in Finite and Infinite Goods (1999), first separates the good (which he analyzes Platonically in terms of imitating the ultimate good, which is God) from the right. He then defends a divine command theory of the right by arguing that obligation is always obligation to someone, and God is the most appropriate person, given human limitations. John Hare, in God and Morality, defends a somewhat similar theory, as well as giving a fuller history than the present article allows of the different conceptions of the relation between morality and religion. Thomas L. Carson's Value and the Good Life (2000) argues that normative theory needs to be based on an account of rationality, and then proposes that a divine-preference account of rationality is superior to all the available alternatives. An objection to divine command theory is mounted by Mark Murphy's An Essay on Divine Authority (2002) on the grounds that divine command only has authority over those persons who have submitted themselves to divine authority, but moral obligation has authority more broadly.
Finally, Linda Zagzebski's Divine Motivation Theory (2004) proposes, as an alternative to divine command theory, that we can understand all moral normativity in terms of the notion of a good emotion, and that God's emotions are the best exemplar. To conclude, this revival of interest in divine command theory, when combined with the revival of natural law theory already discussed, shows evidence that the attempt to connect morality closely to religion is undergoing a robust recovery within professional philosophy.