The Metaphysics of Good and Evil (London, New York: Routledge, 2020)
The Metaphysics of Good and Evil is the first full-length contemporary defence, from the perspective of analytic philosophy, of the Scholastic theory of good and evil – the theory of Aristotle, Augustine, Aquinas, and most medieval and Thomistic philosophers. Goodness is analysed as obedience to nature. Evil is analysed as the privation of goodness. Goodness, surprisingly, is found in the non-living world, but in the living world it takes on a special character. The book analyses various kinds of goodness, showing how they fit into the Scholastic theory. The privation theory of evil is given its most comprehensive contemporary defence, including an account of truthmakers for truths of privation and an analysis of how causation by privation should be understood. In the end, all evil is deviance – a departure from the goodness prescribed by a thing’s essential nature.
Opting Out: Conscience and Cooperation in a Pluralistic Society (London: Institute of Economic Affairs, 2018)
We live in a liberal, pluralistic, largely secular society where, in theory, there is fundamental protection for freedom of conscience generally and freedom of religion in particular. There is, however, both in statute and common law, increasing pressure on religious believers and conscientious objectors (outside wartime) to act in ways that violate their sincere, deeply held beliefs. This is particularly so in health care, where conscientious objection is coming under extreme pressure. I argue that freedom of religion and conscience need to be put on a sounder footing both legislatively and by the courts, particularly in health care. I examine a number of important legal cases in the UK and US, where freedom of religion and conscience have come into conflict with government mandates or equality and anti-discrimination law. In these and other cases we find one of two results: either the conscientious objector loses out against competing rights, or the conscientious objector succeeds, but due to what I consider unsound judicial reasoning. In particular, cases involving cooperation in what the objector considers morally impermissible according to their beliefs have been wrongly understood by some American courts. I argue that a reasonable theory of cooperation incorporated into judicial thinking would enable more acceptable results that gave sufficient protection to conscientious objectors without risking a judicial backlash against objectors who wanted to take their freedoms too far.
I also venture into broader, more controversial waters concerning what I call freedom of dissociation – the fundamental right to withdraw from associating with people, groups, and activities. It is no more than the converse of freedom of association, which all free societies recognise as a basic right. How far should freedom of dissociation go? What might society be like if freedom of dissociation were given more protection in law than it currently has? It would certainly give freedom of religion and conscience a substantial foundation, but it could also lead to discriminatory behaviour to which many people would object. I explore some of these issues, before going back to the narrower area of freedom of conscience and religion in health care, making some proposals about how the law could strengthen these basic pillars of a liberal, free society.
Real Essentialism (London, New York: Routledge, 2007)
This book sets out a system of realist metaphysics in the Aristotelian tradition, applying it to fundamental metaphysical and scientific problems. First, the theory is contrasted with the contemporary essentialism of Saul Kripke, Hilary Putnam, and their followers, which is shown to be inadequate to the task of justifying real, objective, knowable essences. Next, the book criticizes the anti-essentialism of Locke, Quine, Wittgenstein, and Popper. After a further defence of the reality and knowability of essence, the system of real essentialism is laid out, beginning with a defence of hylemorphism - prime matter and substantial form as the foundation of essence. There follows an account of substance, classification, individuation, and identity. Essence and existence, powers, and laws of nature are then analyzed, followed by properties, artefacts, and origins. The book concludes by applying real essentialism in great depth to three central problems at the interface of science and metaphysics: the nature of life, the reality of biological species, and the essence of the person.
Moral Theory: A Non-Consequentialist Approach (Oxford: Blackwell, 2000)
This book sets out the basic system used to solve moral problems, the system that consequentialists deride as 'traditional morality'. The central concepts, principles and distinctions of traditional morality are explained and defended: rights; justice; the good; virtue; the intention/foresight distinction; the acts/omissions distinction; and, centrally, the fundamental value of human life.
Applied Ethics: A Non-Consequentialist Approach (Oxford: Blackwell, 2000)
This book focuses the central concepts of traditional morality - rights, justice, the good, virtue, and the fundamental value of human life - on a number of pressing contemporary problems, including abortion, euthanasia, animals, capital punishment, and war.
The Metaphysics of Identity over Time (London/New York: Palgrave Macmillan/St Martin's Press, 1993)
This book is a systematic investigation into the metaphysical foundations of identity over time. I elaborate and evaluate the most common theory about the persistence of objects through time and change, namely the classical theory of spatio-temporal continuity. I show how the theory requires an ontology of temporal parts, according to which objects are made up of temporally extended segments or stages. This ontology is criticized as unwarranted by modern space-time physics, and as internally incoherent. I argue that identity over time should be seen as a primitive or unanalyzable phenomenon, and that the so-called puzzle cases and paradoxes of identity can be dealt with without recourse to such an ontology.
Classifying Reality (Oxford: Wiley-Blackwell, 2013; 136pp.)
The Old New Logic: Essays on the Philosophy of Fred Sommers (Cambridge, MA: MIT Press, 2005; 242pp.)
(ed. with T. Chappell) Human Values: New Essays on Ethics and Natural Law (London: Palgrave Macmillan, 2004; 272pp; rev. ed. p/back 2007)
Form and Matter: Themes in Contemporary Metaphysics (Oxford: Blackwell, 1999; 119 pp.)
(ed. with Jacqueline A. Laing) Human Lives: Critical Essays on Consequentialist Bioethics (London: Palgrave Macmillan/New York: St Martin's Press, 1997; 244 pp.)
'Mistake-Making: A Theoretical Framework for Generating Research Questions in Biology, With Illustrative Application to Blood Clotting', The Quarterly Review of Biology 97 (2022): 2-13
It is a matter of contention whether or not a general explanatory framework for the biological sciences would be of scientific value, or whether it is even achievable. In this paper we suggest that both are the case, and we outline proposals for a framework capable of generating new scientific questions. Starting with one clear characteristic of biological systems—that they all have the potential to make mistakes—we aim to describe the nature of this potential and the common processes that lie behind it. Given that under most circumstances biological systems function effectively, an examination of different kinds of mistake-making provides pointers to mechanisms that must exist to make failure uncommon. This, in turn, informs a framework for systematic inquiry, which in this paper we apply to the hemostatic system, but we believe could be applied to any system across biology.
This paper tests the hypothesis that the prime matter of classical Aristotelian-Scholastic metaphysics is numerically identical to energy. Is P = E? After outlining the classical Aristotelian concept of prime matter, I provide the master argument for it, based on the phenomenon of substantial change. I then outline what we know about energy as a scientific concept, including its role and application in some key fields. Next, I consider the arguments in favour of prime matter being identical to energy, followed by the arguments against this. The method used is that of ontological profile comparison: does the profile of prime matter match, in key features, that of energy? An affirmative answer, that P = E, would be a momentous discovery: it would show that one of the most neglected and derided ideas of pre-modern metaphysics—a contributor to its downfall in the wake of the Scientific Revolution—was correct all along. From a negative answer, we would still learn much about the interaction of science and metaphysics. It turns out, however, given what we currently know, that the answer is not quite as simple as one might hope.
'Restoring the Hierarchy of Being', in William M.R. Simpson, Robert C. Koons and James Orr (eds) Neo-Aristotelian Metaphysics and the Theology of Nature (London: Routledge, 2021): 94-124.
The idea of the Great Chain of Being was a foundation stone of philosophy from Plato to the nineteenth century. Under the onslaught of various schools of thought, the idea was eventually banished from philosophy, retreating into the corners of esotericism. The idea of an ontological hierarchy, however, merits reappraisal. Rechristening it more prosaically as the hierarchy of being and pruning it of its wilder and less plausible offshoots, we find a concept worthy of reconsideration. After surveying the historical fate of the hierarchy, focusing on the famous work of Arthur Lovejoy, and then identifying the reasons for its demise, I develop a rigorous definition of metaphysical superiority deriving from an understanding of the hierarchy as found in Aristotle and Aquinas. The definition is exemplified by cases taken from the hierarchy and defended against challenges and possible counterexamples. Having argued that my definition survives these objections, I conclude that the concept of a hierarchy of being still has much to commend it, deserving serious reconsideration and perhaps even reinstatement at the core of sound philosophy.
'Formal Causation: Accidental and Substantial', in Ludger Jansen and Petter Sandstad (eds) Neo-Aristotelian Perspectives on Formal Causation (London: Routledge, 2021): 40-61.
'Finality Revived: Powers and Intentionality', Synthese 194 (2017): 2387-2425.
Proponents of physical intentionality argue that the classic hallmarks of intentionality highlighted by Brentano are also found in purely physical powers. Critics worry that this idea is metaphysically obscure at best, and at worst leads to panpsychism or animism. I examine the debate in detail, finding both confusion and illumination in the physical intentionalist thesis. Analysing a number of the canonical features of intentionality, I show that they all point to one overarching phenomenon of which both the mental and the physical are kinds, namely finality. This is the finality of ‘final causes’, the long-discarded idea of universal action for an end to which recent proponents of physical intentionality are in fact pointing whether or not they realise it. I explain finality in terms of the concept of specific indifference, arguing that in the case of the mental, specific indifference is realised by the process of abstraction, which has no correlate in the case of physical powers. This analysis, I conclude, reveals both the strength and weakness of rational creatures such as us, as well as demystifying (albeit only partly) the way in which powers work.
'Being and Goodness', American Philosophical Quarterly 51 (2014): 345-56
The article defends the scholastic principle of the convertibility of being and goodness. First, it identifies the non-moral sense of ‘good’ as the fulfilment of appetite, where appetites are the natural tendencies of objects, whether living or non-living, to or away from certain end states. The good, in its primary meaning, applies to all cases in which some appetite is fulfilled. The article then analyses a central case of inorganic fulfilment, centred on the idea of instantiation – in particular, being a good example of some geometrical kind. It argues that the goodness in a case of instantiation, where there is a standard to be met, is an irreducible kind of goodness. Next, the article argues that continuation in existence is also a kind of fulfilment of appetite possessed by all enduring things. The primary counterexample involves radioactive decay: the article deflects this counterexample by showing that it does not undermine the idea that every enduring object has a tendency to continue in existence. Once we appreciate the place of appetite fulfilment across the natural world, we are more easily able to understand organic fulfilment – goodness for a thing. Without a kind-neutral concept of goodness that applies to all being, organic goodness is far more difficult to account for.
'All for the Good', Philosophical Investigations 38 (2015): 72-95
The Guise of the Good thesis has received much attention since Anscombe’s brief defence in her book Intention. I approach it here from a less common perspective - indirectly, via a theory explaining how it is that moral behaviour is even possible. After setting out how morality requires the employment of a fundamental test, I argue that moral behaviour involves orientation toward the good. Immoral behaviour cannot, however, involve orientation to evil as such, given the theory of evil as privation. There must always be orientation to good of some kind for immorality even to be possible. Evil can, nevertheless, be intended, but this must be carefully understood in terms of the metaphysic of good and evil I set out. Given that metaphysic, the Guise of the Good is a virtual corollary.
'Could There Be a Superhuman Species?', The Southern Journal of Philosophy 52 (2014): 206-26
Transhumanism is the school of thought that advocates the use of technology to enhance the human species, to the point where some supporters consider that a new species altogether could arise. Even some critics think this at least a technological possibility. Some supporters also believe the emergence of a new, improved, superhuman species raises no special ethical questions. Through an examination of the metaphysics of species, and an analysis of the essence of the human species, I argue that the existence of an embodied, genuinely superhuman species is a metaphysical impossibility. Finally, I point out an interesting ethical consideration that this metaphysical truth raises.
'The Metaphysics of Privation', in R. Hüntelmann and J. Hattler (eds) New Scholasticism Meets Analytic Philosophy (Heusenstamm: Editiones Scholasticae, 2014): 63-88
In this paper I defend the theory of evil as a privation of good. I distinguish between privations and mere absences, then consider both the totality and exclusion accounts of truthmakers for negative truths. The totality account is less promising than the exclusion account especially as concerns privative truths, but any account involving positive states as partially responsible for the truth of a privative must invoke needs, and more generally the potencies that make needs possible. I propose a conjunctive analysis of privation, whereby it involves both an absence and a need. Since privations are partly negative they cannot be real causes or effects, but this does not make them illusory either. Privations (and hence evils) are conceptual beings with a foundation in positive reality.
'The World is not an Asymmetric Graph', Analysis 71 (2011): 3-10
Randall Dipert argues in an important 1997 article that, for the first time in the history of philosophy, the world can be proven to be a pure structure of relations, i.e. that all that exists is relational in nature. Alexander Bird has applied the same mathematical technique used by Dipert, namely the theory of asymmetric graphs, to argue that at the ‘fundamental level’ there exist only pure powers, all relationally defined. I argue that both positions are subject to counterintuitive consequences concerning existence and identity that make it unreasonable to believe the world, in its entirety or at the fundamental level, is identical to an asymmetric graph.
'The Metaphysical Foundations of Natural Law', in H. Zaborowski (ed.) Natural Moral Law in Contemporary Society (Washington, DC: Catholic University of America Press, 2010): 44-75
Natural law theory, classically understood, has a robust metaphysical foundation centred on the idea of cosmic order. This is not just an order within human nature but also one that structures the entire world, thereby making it possible for humans to act morally. Cosmic order is both intrinsic and necessary. Randomness is either purely epistemic, or else an objective but relative phenomenon defined by the grade or level of order against which it is compared. There is, however, a deep conceptual connection between order and ordinance - between law and lawgiver. All laws require lawgivers, and even if the law is a metaphysical fact about the intrinsic nature of the world rather than a mere act of will, it requires imposition on the world by a lawgiver. Several arguments are advanced for this view. Natural law theory, then, requires the extrinsic ordination of all the laws of nature, of which the natural moral laws are but a part. But when it comes to morality, an essentialist metaphysic of human nature takes centre stage. In defending human nature essentialism, we uncover the flaws in the so-called ‘new’ natural law theory of Finnis, Grisez, and Boyle.
'Persistence', in J. Kim, E. Sosa, and G. Rosenkrantz (eds) A Companion to Metaphysics, 2nd ed. (Oxford: Wiley-Blackwell, 2009): 55-65
This article surveys the key issues in the contemporary metaphysical debate about persistence. First, the classic spatio-temporal continuity account is outlined and questions raised. Then the four-dimensionalist temporal part theory is sketched and subjected to critique. A more exotic temporal-parts variant called ‘stage theory’ or ‘exdurance’ is described, with serious worries suggested for it. The theory of endurance is then presented as a more appealing approach albeit not without its own issues. Finally, the ‘problem of temporary intrinsics’ is set out and the question broached of whether it is primarily a metaphysical or a semantic problem. Although the suggestion is that it is largely metaphysical, a semantic proposal - ‘sententialism’ - is suggested for dealing with the semantic concerns.
'The Non-Identity of the Categorical and the Dispositional', Analysis 69 (2009): 677-684
This article is a response to Galen Strawson’s ‘The Identity of the Categorical and the Dispositional’ in Analysis 68 (2008): 271-82. There, Strawson argues that there is no real distinction between dispositional and categorical properties and no real distinction between ‘an object and its propertiedness’. I argue that Strawson is wrong on several counts. First, he is mistaken to think that the inseparability of two things entails their identity: real distinctions do not entail separability, so the inseparability of the dispositional and the categorical does not mean they are identical. Secondly, his claim that ‘all being is actual being’, so there is no real distinction between the actual and the potential, misconceives the distinction between actuality and potentiality and involves a number of other misinterpretations of what the believer in real potentiality asserts. Thirdly, he is wrong to claim that there is no real distinction between ‘an object and its propertiedness’. Such a claim fails even if one appeals, as he does, to temporal parts theory. But it also fails on more plausible understandings of what it is to be a bearer of properties.
'The Doctrine of Double Effect', in T. O'Connor and C. Sandis (eds) A Companion to the Philosophy of Action (Oxford: Wiley-Blackwell, 2010): 324-30.
This is an outline of one of the most famous principles of moral reasoning, equally defended and strongly criticised for centuries. It is a cornerstone of Catholic moral theology, but is arguably central to any moral philosophy that takes the doing of good and the avoidance of evil to be paramount. How is it possible both to do good and avoid evil given that much of what we do has both good and bad results? DDE is a codification of the rules governing such actions. It allows the doing of good and evil in certain cases, but forbids it in others. I set out the rules comprised by DDE, looking at each in turn with application to examples. Many of the criticisms of the doctrine are based on simple misunderstandings of what it does and does not say, so I provide important clarifications. I also take on some of the more substantive criticisms, showing how the defender of DDE can respond to them.
(with J.A. Laing) 'Artificial Reproduction, the "Welfare Principle", and the Common Good', Medical Law Review 13 (2005): 328-56
This article challenges the view most recently expounded by Emily Jackson that ‘decisional privacy’ ought to be respected in the realm of artificial reproduction (AR). On this view, it is considered an unjust infringement of individual liberty for the state to interfere with individual or group freedom artificially to produce a child. It is our contention that a proper evaluation of AR and of the relevance of welfare will be sensitive not only to the rights of ‘commissioning parties’ to AR but also to public policy considerations. We argue that AR has implications for the common good, by involving matters of human reproduction, kinship, race, parenthood and identity. In this paper we challenge presuppositions concerning decisional privacy. We examine the essential commodification of human life implicit in AR and the systematicity that makes this possible. We address the objection that it is an ethically neutral way of having children and consider the problem of ‘existential debt’. After examining objections to the thesis that AR is illegitimate for reasons of public policy and the common good, we return to the issue of decisional privacy in the light of considerations concerning the legitimate role of the state in matters affecting human reproduction.
Applied ethics is dominated by consequentialist thinking. Other theories, such as rights-based ones, have a lesser presence. Pragmatism is common, usually tied to loose consequentialist ideas. Natural law theory, however, has been conspicuously absent from debate. Genetic engineering is a prime example of a subject to which natural law theory has so far made little contribution. I examine and expose the misunderstandings of the concept of the natural that have led many applied ethicists to think natural law theory discredited. I outline some key features of the theory, contrasting it with consequentialism, and set out some of the objectionable practices to which the latter is committed when it comes to genetics in general and reproductive technology in particular. I go on to argue that whilst natural law theory cannot rule out all forms of genetic engineering, still it can provide a radical critique of certain kinds of intervention in the natural world based on the distorted manner of living to which many societies and individuals are prone.
There are two methodologically distinct but complementary ways of approaching natural law theory. One is a mainly agent-centred approach, focusing on practical reasoning and the intelligibility of action, paying attention to human tendencies and inclinations. The other, more traditional and mainly world-centred approach, focuses on the metaphysics of the good by means of an analysis of human nature and human faculties, and of the way in which the good must be structured for it to be an object of human pursuit, more specifically, a foundation for moral decision-making. In this paper I take a primarily world-centred approach to fundamental questions of content and structure. First, looking at some typical examples from natural law theorists of lists of the basic goods, I analyse various members that they have proposed to see what the correct list must contain. Next, I look at questions concerning how the natural law must be structured, in particular whether the list of basic goods is finite, whether there is a supreme or superordinate good, and what kinds of hierarchical relations within and across goods must exist in order for the basic goods to serve as a foundation for practical reasoning about morality. The general conclusions I draw are, first, that ontology must be taken seriously, and miscategorization avoided, when identifying basic goods; secondly, that the structure of the good requires a system of principles enabling various kinds of comparative judgment within and across goods, in order for natural law theory to serve as a basis for guiding concrete moral decisions.
Things change. If anything counts as a datum of metaphysics, that does. Hence any correct theory of persistence must be consistent with the existence of change. Yet temporal part theory, alias four-dimensionalism, does not satisfy this basic requirement. First I outline the theory in its standard form. I then show why, contra Mark Heller, there can be no argument for temporal parts based on the Indiscernibility of Identicals, which itself presupposes facts of identity rather than grounds them. Nor does David Lewis's so-called 'problem of temporary intrinsics' give any support to four-dimensionalism. As far as the semantics of change goes, I advocate (as against adverbialism and relationalism) 'sententialism': temporal expressions such as 'at t' or 'from t1 to t2' operate on whole sentences and may not be dropped from sentences expressing change without thereby entailing contradiction. Finally, although not every argument for the inconsistency of four-dimensionalism and change succeeds, I argue, via a discussion of Lombard and van Inwagen, that there is indeed such an inconsistency: temporal part theory is a replacement theory, whereby nothing ever does, literally, change.
'The Ethics of Co-operation in Wrongdoing', in A. O'Hear (ed.), Modern Moral Philosophy (Cambridge: Cambridge University Press, 2004; Royal Institute of Philosophy Annual Lecture Series 2002-3): 203-27.
The ethics of co-operation in another's wrongdoing is an under-explored area of moral philosophy. Yet co-operation is pervasive throughout the world of action and its evaluation is a specialized area of ethics. It is a test of any normative moral theory that it possess the conceptual tools for analysing cases of co-operation and yielding persuasive moral judgments about them. This paper sets out a theory of co-operation based on traditional moral categories derived from the natural law. First I distinguish kinds of co-operation. Then I discuss the lawfulness of co-operation, focusing on the crucial distinction between formal and material co-operation. The former is always wrong; the latter is sometimes permissible. Discussion of concrete cases shows how the relevant principles are to be applied. Analysis also reveals that the employment of those principles involves nothing other than the application of the famous Principle of Double Effect. That the ethics of co-operation is but a special case of PDE constitutes indirect evidence of the plausibility of double effect reasoning. Moreover, the subtlety of the questions involved precludes anything like a consequentialist theory's being of any use in solving problems of co-operation.
Central to recent debate over the Kalam Cosmological Argument, and over the origin of the universe in general, has been the issue of whether the universe began to exist, and if so how this is to be understood. Adolf Grünbaum has used two cosmological models as a basis for arguing that the universe did not begin to exist according to either of them. In this paper I argue that he is wrong on both counts, concentrating on the second, “open interval” model. I give metaphysical considerations for rejecting Grünbaum’s interpretation, and offer a definition of the beginning of existence of an object which improves on prior formulations and which is adequate to show how the universe can indeed be seen to have begun to exist. I conclude with more general metaphysical discussion of the beginning of the universe and of the Kalam Cosmological Argument.
A common argumentative strategy employed by anti-reductionists involves claiming that one kind of entity cannot be identified with or reduced to a second because what can intelligibly be predicated of one cannot be predicated intelligibly of the other. For instance, it might be argued that mind and brain are not identical because it makes sense to say that minds are rational but it does not make sense to say that brains are rational. The scope and power of this kind of argument – if valid – are obvious; but if it turns out that ‘It makes sense to say that…’ creates an opaque context, such arguments will fail. I analyse a possible counterexample to validity and show that it is not conclusive, as it depends on what syntactical construction is given to the premises. This leads to the general observation that the argument form under consideration works for some constructions but not others, and thus to the conclusion that further analysis of intelligibility is called for before it can be known whether the argumentative strategy is open to the anti-reductionist or not.
Here I reply point by point to Graham Oppy's critique of the first part of my paper, 'Traversal of the Infinite, the "Big Bang" and the Kalam Cosmological Argument', arguing that none of his criticisms are sound. (Oppy's critique is 'The Tristram Shandy Paradox: A Response to David S. Oderberg', Philosophia Christi 4 (2002) 335-49.)
Debate over the Kalam Cosmological Argument (KCA) has flourished in recent years due to the impressive work of William Lane Craig. The basic argument - that the universe has a cause (viz. God) because the universe began to exist and whatever begins to exist has a cause of its beginning to exist - has excited vigorous criticism on various fronts. The aim of this paper is twofold: (a) to survey and evaluate that aspect of the KCA which relies on the claim that the universe as actual infinite cannot be formed by successive addition (i.e. cannot be 'traversed'); (b) to survey and evaluate that aspect of the argument which relies on the claim that whatever begins to exist must have a cause of the beginning of its existence. I canvass and refute criticisms of both claims, providing positive arguments to show that the claims are true. On both scores, then, the KCA stands unrefuted.
For all the attention given to the revival of essentialism based on the work of Plantinga, Kripke, Putnam and others, what we have really seen is the coming to prominence of an ontologically thin, unsystematic set of ideas with little theoretical cohesion or metaphysical underpinning, motivated primarily by considerations in modal logic and philosophy of language. I contrast contemporary essentialism of this kind with a metaphysically more robust, neo-Aristotelian or 'real' essentialism which attributes real essences to kinds of object. After rejecting some common Quinean sceptical arguments against the very idea of de re necessity, I outline and briefly defend three central tenets of real essentialism derived from the relation between: essence and identity; essence and existence; essence and property. I end by locating real essentialism within a broader programme called philosophical traditionalism.
This paper responds to Graham Oppy's challenge to the Kalam Cosmological Argument entitled 'Time, Successive Addition and Kalam Cosmological Arguments', in Philosophia Christi 3 (2001), pp.181-91. I look specifically at attacks on the argument based on the confusion of actual and potential infinity, scepticism about processes, doubt about whether the beginning of the universe can be adequately formulated, and the impossibility of traversing the infinite.
Freedom of belief is one of the entrenched values in modern society. Interpreted as the right not to be coerced into believing something, it is surely correct. But most people take it to mean that there is a right to false belief, a right to be wrong. People think that freedom of thought is a good thing, and this must include the freedom to make mistakes. It is also often thought that making mistakes is a life-enhancing and essential part of personal development. I argue that these ideas are false. Beginning with an examination of the basic good of truth, and making comparisons with other goods like health and friendship, I argue that there is a duty to believe only the truth, which thus logically excludes the right also to believe falsehood. I distinguish between the strict wrongness of false belief and the fact that, because of our epistemic limitations, we are not always to be blamed for our false beliefs. Even in the case of those beliefs which are involuntary, there is no right to have them if they are false, even though we are not to be blamed for having them. The right to be wrong, I conclude, is a modern myth.
This paper is a detailed study of what are traditionally called the cardinal virtues: prudence, justice, temperance and fortitude. I defend what I call the Cardinality Thesis, that the traditional four and no others are cardinal. I define cardinality in terms of three sub-theses, the first being that the cardinal virtues are jointly necessary for the possession of every other virtue, the second that each of the other virtues is a species of one of the four cardinals, and the third that many of the other virtues are also auxiliaries of one or more cardinals. I provide abstract arguments for each sub-thesis, followed by illustration from concrete cases. I then use these results to shed light on the two fundamental problems of the acquisition of the virtues and their unity, proving some further theses in the latter case.
One of the objections raised to the Kalam Cosmological Argument (KCA) (the universe began to exist; whatever begins to exist has a cause of the beginning of its existence; therefore the universe has a cause of the beginning of its existence [which, by further argument, is claimed to be God]) is that the universe did not begin to exist. Against William Lane Craig, KCA's stoutest current defender, Adolf Grünbaum proposes two models of the universe, both of which have adherents, and neither of which implies that the universe began to exist. Craig has already replied to Grünbaum, but his remarks on the first cosmological model are more effective than those on the second. In this paper I outline the two models and show contra Grünbaum that they both imply the universe did begin to exist. I argue that clarification of the concept of a beginning, which Grünbaum misunderstands, helps us to see why his second cosmological model does not undermine the first premise of KCA.
This paper seeks to refute the notorious charge made (by Geach, Urmson et al.) against Aristotle that he is guilty of a quantifier shift fallacy at the beginning of the Nicomachean Ethics, when he argues: ‘Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good; and for this reason the good has rightly been declared to be that at which all things aim.’ I show that Aristotle does indeed argue in the way Geach asserts, but that, contrary to his accusers, the reasoning is not fallacious. Second-order logic is used to show why this is so, and further observations on the nature of the good are made in light of the acquittal of Aristotle.
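The alleged fallacy can be stated schematically. What follows is a standard first-order rendering of the inference Geach attributes to Aristotle, given here only to fix ideas; it is not the paper's own second-order reconstruction, and the predicate letter is an illustrative choice:

```latex
% A(x,y): pursuit x aims at good y.
% The premise Geach finds in Aristotle:
\forall x\, \exists y\, A(x,y)
% The conclusion allegedly drawn by an illicit quantifier shift:
\exists y\, \forall x\, A(x,y)
% In general the second does not follow from the first:
\forall x\, \exists y\, A(x,y) \;\not\vdash\; \exists y\, \forall x\, A(x,y)
```

The paper's claim is that, once the argument is properly regimented in second-order terms, Aristotle's reasoning does not in fact instantiate this invalid pattern.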
This paper is a critique of the metaphysical claims underlying Peter Singer and Helga Kuhse's arguments for the permissibility of human embryo experimentation. I refute their objections to the claim that the zygote and embryo are individual human beings, which objections are based on phenomena associated with fission, totipotency, cloning, and parthenogenesis. Once we understand the metaphysics behind such phenomena, we can see that conception is an ontologically special event, contrary to Singer and Kuhse. The moral status of the zygote and embryo is not undermined by the specious metaphysical arguments considered here.
This paper sets out to answer the question: Does a person who kills another at the latter's request commit an injustice against that person? Defining justice in terms of rights, it is argued that the right to life is inalienable; hence, that voluntary euthanasia is always an injustice by the killer against the killed. Inalienability is justified in three ways. First, it is shown that there is nothing peculiar in the concept of an inalienable right and that even consequentialism must recognize at least one such right. Secondly, some plausible examples of inalienable rights are considered and a purported refutation of inalienability based on the analogy with property rights is dismissed. Thirdly, a positive account is outlined, according to which the right to life is seen as fundamental to a theory of human good.
This paper defends the thesis (called the Substance Thesis) that no two substances belonging to the same substantial kind can be in the same place at the same time. First, weaker theses allowing certain types of coincidence are distinguished and demonstrated. Secondly, the Substance Thesis is elaborated and a failed attempt to refute it is discussed. Thirdly, the thesis is shown not to be refuted by cases (called Leibnizian) of coinciding, ontologically dependent objects. Fourthly, it is shown in terms of discussion of key examples why coincidence is impossible for substances. Finally, the Substance Thesis is applied to personal identity, to show why persons cannot coincide and to refute a recent attempt to prove that they can.
This paper discusses the case of a sovereign elected by the people of a mythical state in order to safeguard and faithfully to transmit to his successors the people's sacred religious beliefs. After his election he enacts a law invalidating the election of heretics; but it turns out that he too fails under it. Is he validly elected? The question is left unanswered, but the source of the problem is discussed, various proposals are rejected, and both similarities to and differences from the Liar Paradox are noted.
This is a reply to the paper by Mark Johnston in The Journal of Philosophy 84 (1987): 59-83, entitled ‘Human Beings’. Johnston proposes a solution to various problems of personal identity based on the notion of the person as a ‘human being’, a kind essentially distinct from the kind ‘human organism’, which is a biological kind. It is argued that Johnston's idiosyncratic use of ‘human being’ is both ad hoc and obscure, and does not illuminate the debate over personal identity. Moreover, it can be seen that Johnston's position collapses into the sort of Parfitian reductionism with which he wishes to contrast it, and which he so rightly opposes.
In ‘Personal and Impersonal Identity’ (Mind 97 (1988): 29-49), Timothy Sprigge discusses reasons for a general suspicion of trans-temporal identity, and rejects what he says are the usual grounds given against the suspicion, providing instead his own reasons for rejecting it. He concludes that trans-temporal identity, including personal identity, is as genuine a case of identity as what he considers to be the paradigmatic case of identity. In this reply I take issue with some of the basic elements of Sprigge’s argument.
Philosophers have conceded too much to Kripke in their voluminous discussion of the ‘sceptical paradox’ he derives from Wittgenstein. There is a distinction between what a person means at a given time and what he will do at a later time. The speaker must mean plus by ‘plus’, because he knowingly performs addition for x and y less than 57; he only trivially means quus, because his behaviour is trivially in accord with the possibility of his deviating in infinitely many ways. But the hypothesis that he non-trivially, or seriously, means quus rather than plus is undermined by the very definition of that hypothesis.
'Perceptual Relativism', Philosophia 16 (1986): 1-9
Perceptual relativism holds that the entities and properties of the world are relative to the constitution of the body perceiving them. This is held to be supported by the argument from differing perceptual apparatus (ADP). This paper seeks to expose the impossibility of setting up ADP in a non-self-defeating way. The conclusion is that the world is simply as humans perceive it, and that any differences between perceivers are to be explained empirically.