March 25, 2011


 Further exploration of the theory, however, showed that it had many features uncongenial to Machianism. Some of these are connected with the necessity of imposing boundary conditions for the equation connecting the matter distribution to the space-time structure. General relativity certainly allows as solutions model universes of a non-Machian sort - for example, those which are aptly described as having the smoothed-out matter of the universe itself in ‘absolute rotation’. There are strong arguments to suggest that general relativity, like Newton’s theory and like special relativity, requires the positing of a structure of ‘space-time itself’ and of motion relative to that structure, in order to account for the needed distinctions of kinds of motion in dynamics. Whereas in Newtonian theory it was ‘space itself’ that provided the absolute reference frames, in general relativity it is the structure of the null and time-like geodesics that performs this task. The compatibility of general relativity with Machian ideas is, however, a subtle matter and one still open to debate.
 Other aspects of the world described by the general theory of relativity argue for a substantivalist reading of the theory as well. Space-time has become a dynamic element of the world, one that might be thought of as ‘causally interacting’ with the ordinary matter of the world. In some sense one can even attribute energy (and hence mass) to space-time itself (although this is a subtle matter in the theory), making the very distinction between ‘matter’ and ‘space-time itself’ much more dubious than such a distinction would have been in the early days of the debate between substantivalists and relationists.
 Nonetheless, a naive reading of general relativity as a substantivalist theory has its problems as well. One problem was noted by Einstein himself in the early days of the theory. If a region of space-time is devoid of non-gravitational mass-energy, alternative solutions to the equation of the theory connecting mass-energy with the space-time structure will agree in all regions outside the matterless ‘hole’, but will offer distinct space-time structures within it. This suggests a local version of the old Leibniz arguments against substantivalism. The argument now takes the form of a claim that a substantival reading of the theory forces it into a strong version of indeterminism, since the space-time structure outside the hole fails to fix the structure of space-time in the hole. Einstein’s own response to this problem has a very relationistic cast, taking the ‘real facts’ of the world to be intersections of paths of particles and light rays with one another and not the structure of ‘space-time itself’. Needless to say, there are substantival attempts to deal with the ‘hole’ argument as well, which try to reconcile a substantival reading of the theory with determinism.
 There are arguments on the part of the relationist to the effect that any substantivalist theory, even one with a distinction between absolute acceleration and mere relative acceleration, can be given a relationistic formulation. These relationistic reformulations of the standard theories lack the standard theories’ ability to explain why non-inertial motion has the features that it does. But the relationist counters by arguing that the explanation forthcoming from the substantivalist account is too ‘thin’ to have genuine explanatory value anyway.
 Relationist theories are founded, as are conventionalist theses in the epistemology of space-time, on the desire to restrict ontology to that which is present in experience, this being taken to be coincidences of material events at a point. Such relationist-conventionalist accounts suffer, however, from a strong pressure to slide into full-fledged phenomenalism.
 As science progresses, our posited physical space-times become more and more remote from the space-time we think of as characterizing immediate experience. This will become even more true as we move from the classical space-time of the relativity theories into fully quantized physical accounts of space-time. There is strong pressure from the growing divergence of the space-time of physics from the space-time of our ‘immediate experience’ to dissociate the two completely and, perhaps, to stop thinking of the space-time of physics as being anything like our ordinary notions of space and time. Whether such a radical dissociation of posited nature from phenomenological experience can be sustained, however, without entirely giving up our grasp on what it is to think of a physical theory ‘realistically’ is an open question.
 Science aims to represent accurately actual ontological unity/diversity. The wholeness of the spatiotemporal framework and the existence of physics, i.e., of laws invariant across all the states of matter, do represent ontological unities which must be reflected in some unification of content. However, there is no simple relation between ontological and descriptive unity/diversity. A variety of approaches to representing unity are available (the formal-substantive spectrum and, correspondingly, the range of naturalisms). Anything complex will support many different partial descriptions, and, conversely, different kinds of things may all obey the laws of a unified theory, e.g., the quantum field theory of fundamental particles, or may collectively be ascribed dynamical unity, e.g., self-organizing systems.
 It is reasonable to eliminate gratuitous duplication from description - that is, to apply some principle of simplicity; however, this is not necessarily the same as demanding that its content satisfy some further methodological requirement for formal unification. Beyond eliminating such duplication, there is again no reason to limit the account to simple logical systematization: the unity of science might instead be complex, reflecting our multiple epistemic access to a complex reality.
 Biology provides a useful analogy. The many diverse species in an ecology nonetheless each map, genetically and cognitively, interrelatable aspects of a single environment and share exploitation of the properties of gravity, light, and so forth. Though the somatic expression is somewhat idiosyncratic to each species, and the representation incomplete, together they form an interrelatable unity, a multidimensional functional representation of their collective world. Similarly, there are many scientific disciplines, each with its distinctive domains, theories, and methods specialized to the conditions under which it accesses our world. Each discipline may exhibit growing internal metaphysical and nomological unities. On occasion, disciplines, or components thereof, may also formally unite under logical reduction. But a more substantive unity may also be manifested: though content may be somewhat idiosyncratic to each discipline, and the representation incomplete, together the disciplinary contents form an interrelatable unity, a multidimensional functional representation of their collective world. Correlatively, a key strength of scientific activity lies not in formal monolithicity, but in its forming a complex unity of diverse, interacting processes of experimentation, theorizing, instrumentation, and the like.
 While this complex unity may be all that finite cognizers in a complex world can achieve, the accurate representation of a single world is still a central aim. Throughout the history of physics, significant advances are marked by the introduction of new representation (state) spaces in which different descriptions (reference frames) are embedded as interrelatable perspectives among many - thus, the passage from Newtonian to relativistic space-time perspectives. Analogously, young children learn to embed two-dimensional visual perspectives in a three-dimensional space in which object constancy is achieved and their own bodies are but some objects among many. In both cases, the process creates constant methodological pressure for greater formal unity within complex unity.
 The role of unity in the intimate relation between metaphysics and method in the investigation of nature is well illustrated by the prelude to Newtonian science. In the millennial Greco-Christian religion preceding the founder of modern astronomy, Johannes Kepler (1571-1630), nature was conceived as essentially a unified mystical order, because suffused with divine reason and intelligence. The pattern of nature was not obvious, however, but a hidden ordered unity which revealed itself to a diligent search as a luminous necessity. In his Mysterium Cosmographicum, Kepler tried to construct a model of planetary motion based on the five Pythagorean regular or perfect solids. These were to be inscribed within the Aristotelian perfect spherical planetary orbits in order, and so determine them. Even the fact that space is a three-dimensional unity was a reflection of the one triune God. And when the observational facts proved too awkward for this scheme, Kepler tried instead, in his Harmonice Mundi, to build his unified model on the harmonies of the Pythagorean musical scale.
 Subsequently, Kepler trod a difficult and reluctant path to the extraction of his famous three empirical laws of planetary motion: laws that made the Newtonian revolution possible, but had none of the elegantly simple symmetries that mathematical mysticism required. Thus, we find in Kepler both the medieval methods and theories of metaphysically unified religio-mathematical mysticism and those of modern empirical observation and model fitting. He is a transitional figure in the passage to modern science.
 To appreciate both the historical tradition and the role of unity in modern scientific method, consider Newton’s methodology, focussing just on Newton’s derivation of the law of universal gravitation in Principia Mathematica, book iii. The essential steps are these: (1) The experimental work of Kepler and Galileo (1564-1642) is appealed to, so as to establish certain phenomena, principally Kepler’s laws of celestial planetary motion and Galileo’s terrestrial law of free fall. (2) Newton’s basic laws of motion are applied to the idealized system of an object small in size and mass moving with respect to a much larger mass under the action of a force whose features are purely geometrically determined. The assumed linear vector nature of the force allows construction of the centre-of-mass frame, which separates out relative from common motions: it is an inertial frame (one for which Newton’s first law of motion holds), and the construction can be extended to encompass all solar system objects.
 (3) A sensitive equivalence is obtained between Kepler’s laws and the geometrical properties of the force: namely, that it is directed always along the line of centres between the masses, and that it varies inversely as the square of the distance between them. (4) Various instances of this force law are obtained for various bodies in the heavens - for example, the individual planets and the moons of Jupiter. From these one can obtain several interconnected mass ratios - in particular, several mass estimates for the Sun, which can be shown to cohere mutually. (5) The value of this force for the Moon is shown to be identical to the force required by Galileo’s law of free fall at the Earth’s surface. (6) Appeal is made again to the laws of motion (especially the third law) to argue that all satellites and falling bodies are equally themselves sources of gravitational force. (7) The force is then generalized to a universal gravitation and is shown to explain various other phenomena - for example, Galileo’s law for pendulum action - while deviations from the original idealizations are shown to be suitably small, thus leaving the original conclusions drawn from Kepler’s laws intact while providing explanations for the deviations.
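The heart of step (3) can be sketched, for the idealized special case of a uniform circular orbit (Newton’s own proof covers the elliptical case), as a short derivation:

```latex
% Centripetal force for uniform circular motion of mass m at radius r, period T:
F = \frac{m v^{2}}{r}, \qquad v = \frac{2\pi r}{T}
\;\;\Longrightarrow\;\; F = \frac{4\pi^{2} m r}{T^{2}} .
% Kepler's third law supplies T^{2} = k r^{3} for a single constant k, so
F = \frac{4\pi^{2} m r}{k r^{3}}
  = \frac{4\pi^{2}}{k}\,\frac{m}{r^{2}}
\;\propto\; \frac{m}{r^{2}} .
```

The same constant k serves for every planet, which is what ties the separate orbits to a single inverse-square force.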
 Newton’s constructions represent a great methodological, as well as theoretical, achievement. Many other methodological components besides unity deserve study in their own right. The sense of unification here is that of a deep systematization: given the laws of motion, the geometrical form of the gravitational force and all the significant parameters needed for a complete dynamical description - that is, the constant G and the exponent n of the geometrical form of gravity Gm1m2/r^n - are uniquely determined from phenomena and, after the law of universal gravitation has been derived, it plus the laws of motion determine the space and time frames and a set of self-consistent attributions of mass. For example, the coherent mass attributions ground the construction of the locally inertial centre-of-mass frame, and Newton’s first law then enables us to treat time as a magnitude: equal times are those during which a freely moving body traverses equal distances. The space and time frames in turn ground use of the laws of motion, completing the constructive circle. This construction has a profound unity to it, expressed by the multiple interdependency of its components, the convergence of its approximations, and the coherence of its multiply determined quantities. Newton’s Rule IV reads (loosely): do not introduce a rival theory unless it provides an equal or superior unified construction - in particular, unless it is able to measure its parameters in terms of empirical phenomena at least as thoroughly and cross-situationally invariantly (Rule III) as does the current theory. This gives unity a central place in scientific method.
 Kant and Whewell seized on this feature as a key reason for believing that the Newtonian account had a privileged intelligibility and necessity. Significantly, the requirement to explain deviations from Kepler’s laws through gravitational perturbations has its limits, especially in the cases of the Moon and Mercury: these need explanation, the former through the complexities of n-body dynamics (which may even show chaos) and the latter through relativistic theory. Today we no longer accept the truth, let alone the necessity, of Newton’s theory. Nonetheless, it remains a standard of intelligibility. It is in this role that it functioned, not just for Kant, but also for Reichenbach, and later Einstein and even Bohr: their sense of crisis with regard to modern physics, and their efforts to reconstruct it, are best seen as stemming from their recognition of the falsification of this ideal by quantum theory. Nonetheless, quantum theory represents a highly unified, because symmetry-preserving, dynamics, reveals universal constants, and satisfies the requirement of coherent and invariant parameter determinations.
 Newtonian method provides a central, simple example of the claim that increased unification brings increased explanatory power. A good explanation increases our understanding of the world, and clearly a convincing story can do this. Nonetheless, we have also achieved great increases in our understanding of the world through unification. Newton was able to unify a wide range of phenomena by using his three laws of motion together with his universal law of gravitation. Among other things he was able to account for Johannes Kepler’s three laws of planetary motion, the tides, the motion of the comets, projectile motion and pendulums. Kepler’s laws of planetary motion are, indeed, the first mathematical, scientific laws of astronomy of the modern era. They state (1) that the planets travel in elliptical orbits, with one focus of the ellipse being the sun, (2) that the radius between sun and planet sweeps out equal areas in equal times, and (3) that the squares of the periods of revolution of any two planets are in the same ratio as the cubes of their mean distances from the sun.
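Stated in modern notation (the symbols a, e and T below are the standard astronomical ones, not the text’s), the three laws read:

```latex
% (1) Elliptical orbit, Sun at one focus (a = semi-major axis, e = eccentricity):
r(\theta) = \frac{a\,(1 - e^{2})}{1 + e \cos\theta}
% (2) The Sun-planet radius sweeps out equal areas in equal times:
\frac{dA}{dt} = \text{constant}
% (3) For any two planets, periods and mean distances satisfy:
\frac{T_{1}^{2}}{T_{2}^{2}} = \frac{a_{1}^{3}}{a_{2}^{3}}
```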
 We have explanations by reference to causation, to identities, to analogies, to unification, and possibly to other factors; yet philosophically we would like to find some deeper theory that explains what it is about each of these apparently diverse forms of explanation that makes them explanatory. This we lack at the moment. Dictionary definitions typically explicate the notion of explanation in terms of understanding: an explanation is something that gives understanding or renders something intelligible. Perhaps this is the unifying notion: the different types of explanation are all types of explanation in virtue of their power to give understanding. While certainly an explanation must be capable of giving an appropriately tutored person a psychological sense of understanding, this is not likely to be a fruitful way forward. For there is virtually no limit to what has been taken to give understanding. Once upon a time, many thought that the facts that there were seven virtues and seven orifices of the human head gave them an understanding of why there were (allegedly) only seven planets. We need to distinguish between real and spurious understanding, and for that we need a philosophical theory of explanation that will give us the hallmark of a good explanation.
 In recent years, there has been a growing awareness of the pragmatic aspect of explanation. What counts as a satisfactory explanation depends on features of the context in which the explanation is sought. Willie Sutton, the notorious bank robber, is alleged to have answered a priest’s question, ‘Why do you rob banks?’, by saying, ‘That is where the money is’. We need to look at the context to be clear about what exactly an explanation is being sought for. Typically, we are seeking to explain why something is the case rather than something else. The question which Willie’s priest probably had in mind was ‘Why do you rob banks rather than have a socially worthwhile job?’ and not the question ‘Why do you rob banks rather than churches?’. We also need to attend to the background information possessed by the questioner. If we are asked why a certain bird has a long beak, it is no use answering (as the D-N approach might seem to license) that the bird is an Aleutian tern and all Aleutian terns have long beaks if the questioner already knows that it is an Aleutian tern. A satisfactory answer typically provides new information. In this case, the questioner may be looking for some evolutionary account of why that species has evolved long beaks. Similarly, we need to attend to the level of sophistication of the answer to be given. We do not provide the same explanation of some chemical phenomenon to a schoolchild as to a student of quantum chemistry.
 Van Fraassen, whose work has been crucially important in drawing attention to the pragmatic aspects of explanation, has gone further in advocating a purely pragmatic theory of explanation. A crucial feature of his approach is a notion of relevance. Explanatory answers to ‘why’ questions must be relevant, but relevance itself is a function of the context for van Fraassen. For that reason he has denied that it even makes sense to talk of the explanatory power of a theory. However, his critics (Kitcher and Salmon) point out that his notion of relevance is unconstrained, with the consequence that anything can explain anything. This reductio can be avoided only by developing constraints on the relation of relevance, constraints that will not be a function of the context and hence take us away from a purely pragmatic approach to explanation.
 The result is increased explanatory power for Newton’s theory because of the increased scope and robustness of its laws, since the data pool which now supports them is the largest and most widely accessible, and it brings its support to bear on a single force law with only two adjustable, multiply determined parameters (the masses). Call this kind of unification (simpler than full constructive unification) ‘coherent unification’. Much has been made of these ideas in recent philosophy of method, representing something of a resurgence of the Kant-Whewell tradition.
 Unification of theories is achieved when several theories T1, T2, . . . Tn previously regarded as distinct are subsumed into a theory of broader scope T*. Classical examples are the unification of theories of electricity, magnetism, and light into Maxwell’s theory of electrodynamics, and the unification of evolutionary and genetic theory in the modern synthesis.
 In some instances of unification, T* logically entails T1, T2, . . . Tn under particular assumptions. This is the sense in which the equation of state for ideal gases, pV = nRT, is a unification of Boyle’s law, pV = constant for constant temperature, and Charles’s law, V/T = constant for constant pressure. Frequently, however, the logical relations between the theories involved in unification are less straightforward. In some cases, the claims of T* strictly contradict the claims of T1, T2, . . . Tn. For instance, Newton’s inverse-square law of gravitation is inconsistent with Kepler’s laws of planetary motion and Galileo’s law of free fall, which it is often said to have unified. Calling such an achievement ‘unification’ may be justified by saying that T* accounts on its own for the domains of phenomena that had previously been treated by T1, T2, . . . Tn. In other cases described as unification, T* uses fundamental concepts different from those of T1, T2, . . . Tn, so the logical relations among them are unclear. For instance, the wave and corpuscular theories of light are said to have been unified in quantum theory, but the concept of the quantum particle is alien to the classical theories. Some authors view such cases not as a unification of the original T1, T2, . . . Tn, but as their abandonment and replacement by a wholly new theory T* that is incommensurable with them.
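The entailment in the ideal-gas case can be displayed in two lines: holding one variable of the unifying equation fixed recovers each restricted law.

```latex
pV = nRT
% Fix T (and n): Boyle's law
\text{Fix } T:\quad pV = nRT = \text{constant}
% Fix p (and n): Charles's law
\text{Fix } p:\quad \frac{V}{T} = \frac{nR}{p} = \text{constant}
```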
 Standard techniques for the unification of theories involve isomorphism and reduction. The realization that particular theories attribute isomorphic structures to a number of different physical systems may point the way to a unified theory that attributes the same structure to all such systems. For example, all instances of wave propagation are described by the wave equation:
    ∂²y/∂x² = (1/v²) ∂²y/∂t²
where the displacement y is given different physical interpretations in different instances. The reduction of some theories to a lower-level theory, perhaps through uncovering the micro-structure of phenomena, may enable the former to be unified into the latter. For instance, Newtonian mechanics represents a unification of many classical physical theories, extending from statistical thermodynamics to celestial mechanics, which portray physical phenomena as systems of classical particles in motion.
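The point that one equation covers many physically distinct systems can be illustrated with a minimal numerical sketch (the function name, grid sizes, and parameter values below are illustrative choices, not anything from the text): the same finite-difference update serves whether y is read as a string’s displacement, a pressure disturbance, or a voltage on a line.

```python
import numpy as np

def simulate_wave(y0, v, dx, dt, steps):
    """Leapfrog finite-difference solver for the 1-D wave equation
    d2y/dx2 = (1/v^2) d2y/dt2, with fixed (y = 0) endpoints and zero
    initial velocity. The interpretation of y is left open: the same
    structure describes strings, sound, and transmission lines."""
    c = (v * dt / dx) ** 2        # squared Courant number; need <= 1 for stability
    y_prev = y0.copy()
    y = y0.copy()
    # First step uses the special form for zero initial velocity.
    y[1:-1] = y0[1:-1] + 0.5 * c * (y0[2:] - 2 * y0[1:-1] + y0[:-2])
    for _ in range(steps - 1):
        y_next = np.zeros_like(y)
        y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                        + c * (y[2:] - 2 * y[1:-1] + y[:-2]))
        y_prev, y = y, y_next
    return y

# One code, two "interpretations": a plucked string or a pressure pulse.
x = np.linspace(0.0, 1.0, 101)
pulse = np.exp(-200 * (x - 0.5) ** 2)   # symmetric initial bump
string = simulate_wave(pulse, v=1.0, dx=0.01, dt=0.005, steps=40)
```

The initial bump splits into two pulses travelling in opposite directions, exactly as the isomorphic-structure point suggests, regardless of what y is taken to represent.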
 Alternative forms of theory unification may be achieved on alternative principles. A good example is provided by the Newtonian and Leibnizian programs for theory unification. The Newtonian program involves analysing all physical phenomena as the effects of forces between particles. Each force is described by a causal law, modelled on the law of gravitation. The repeated application of these laws is expected to solve all physical problems, unifying celestial mechanics with terrestrial dynamics and the sciences of solids and of fluids. By contrast, the Leibnizian program proposes to unify physical science on the basis of abstract and fundamental principles governing all phenomena, such as principles of continuity, conservation, and relativity. In the Newtonian program, unification derives from the fact that causal laws of the same form apply to every event in the universe; in the Leibnizian program, it derives from the fact that a few universal principles apply to the universe as a whole. The Newtonian approach was dominant in the eighteenth and nineteenth centuries, but more recent strategies to unify the physical sciences have hinged on formulating universal conservation and symmetry principles reminiscent of the Leibnizian program.
 There are several accounts of why theory unification is a desirable aim. Many hinge on simplicity considerations: a theory of greater generality is more informative than a set of restricted theories, since we need to gather less information about a state of affairs in order to apply the theory to it. Theories of broader scope are preferable to theories of narrower scope in virtue of being more vulnerable to refutation. Bayesian principles suggest that simpler theories yielding the same predictions as more complex ones derive stronger support from common favourable evidence: on this view, a single general theory may be better confirmed than several theories of narrower scope that are equally consistent with the available data.
 Theory unification has provided the basis for influential accounts of explanation. According to many authors, explanation is largely a matter of unifying seemingly independent instances under a generalization. As the explanation of individual physical occurrences is achieved by bringing them within the scope of a scientific theory, so the explanation of individual theories is achieved by deriving them from a theory of a wider domain. On this view, T1, T2, . . . Tn are explained by being unified into T*.
 The question of what theory unification reveals about the world arises in the debate between scientific realism and instrumentalism. According to scientific realists, the unification of theories reveals common causes or mechanisms underlying apparently unconnected phenomena. The comparative ease with which scientists achieve theoretical unification, realists maintain, can be explained if there exists a substrate underlying all phenomena composed of real observable and unobservable entities. Instrumentalists provide a methodological account of theory unification which rejects these ontological claims of realism.
 Arguments, in a like manner, are sets of statements some of which purportedly provide support for another. The statements which purportedly provide the support are the premises, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, whereas inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case, if all its premises are true, then its conclusion must be true. An argument is strong just in case, if all its premises are true, its conclusion is probable. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premises of an argument confer on its conclusion.
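One of the methods deductive logic provides for testing validity is the truth table, which can be sketched in a few lines (the helper name here is mine, purely for illustration): an argument is valid just in case no assignment of truth values makes every premise true while the conclusion is false.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Truth-table test: search every assignment for a counterexample,
    i.e. a row with all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False   # counterexample found: invalid
    return True            # no counterexample: valid

# Modus ponens: P, P -> Q, therefore Q  (a valid form)
mp_premises = [lambda e: e['P'], lambda e: (not e['P']) or e['Q']]
print(is_valid(mp_premises, lambda e: e['Q'], ['P', 'Q']))    # True

# Affirming the consequent: Q, P -> Q, therefore P  (an invalid form)
ac_premises = [lambda e: e['Q'], lambda e: (not e['P']) or e['Q']]
print(is_valid(ac_premises, lambda e: e['P'], ['P', 'Q']))    # False
```

Inductive strength, by contrast, admits of degrees and has no comparably mechanical test.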
 The argument from analogy is intended to establish our right to believe in the existence and nature of ‘other minds’. It admits that it is possible that the objects we call persons are, other than ourselves, mindless automata, but claims that we nonetheless have sufficient reason for supposing this not to be the case: there is more evidence that they are not mindless automata than that they are.
The classic statement of the argument comes from J.S. Mill. He wrote:
 I am conscious in myself of a series of facts connected by an uniform sequence, of which the beginning is modifications of my body, the middle is feelings, the end is outward demeanour. In the case of other human beings I have the evidence of my senses for the first and last links of the series, but not for the intermediate link. I find, however, that the sequence between the first and last is as regular and constant in those other cases as it is in mine. In my own case I know that the first link produces the last through the intermediate link, and could not produce it without. Experience, therefore, obliges me to conclude that there must be an intermediate link, which must either be the same in others as in myself, or a different one, . . . by supposing the link to be of the same nature . . . I conform to the legitimate rules of experimental enquiry.
As an inductive argument this is very weak, because it is condemned to arguing from a single case. But to this we might reply that, nonetheless, we have more evidence that there are other minds than that there are not.
 The real criticism of the argument is due to the Austrian philosopher Ludwig Wittgenstein (1889-1951). It is that the argument assumes that we at least understand the claim that there are subjects of experience other than ourselves, who enjoy experiences which are like ours but not ours: it only asks what reason we have to suppose that claim true. But if the argument does indeed express the ground of our right to believe in the existence of others, it is impossible to explain how we are able to achieve that understanding. So if there is a place for the argument from analogy, the problem of other minds - the real, hard problem, which is how we acquire a conception of another mind - is insoluble. The argument is either redundant or worse.
 Even so, the expression ‘the private language argument’ is sometimes used broadly to refer to a battery of arguments in Wittgenstein’s Philosophical Investigations, which are concerned with the concepts of, and relations between, the mental and its behavioural manifestations (the inner and the outer), self-knowledge and knowledge of others’ mental states, avowals of experience and descriptions of experience. It is sometimes used narrowly to refer to a single chain of argument in which Wittgenstein demonstrates the incoherence of the idea that sensation names and names of experiences are given meaning by association with a mental ‘object’, e.g., the word ‘pain’ by association with the sensation of pain, or by mental (private) ‘ostensive definition’, in which a mental ‘entity’, e.g., a mental image stored in memory, supposedly functions as a sample and is conceived as providing a paradigm for the application of the name.
 A ‘private language’ is not a private code, which could be cracked by another person, nor a language spoken by only one person, which could be taught to others, but a putative language the individual words of which refer to what can (apparently) be known only by the speaker, i.e., to his immediate private sensations or, to use empiricist jargon, to the ‘ideas’ in his mind. It has been a presupposition of the mainstream of modern philosophy - empiricist, rationalist and Kantian alike - and of representationalism, that the languages we speak are such private languages, that the foundations of language no less than the foundations of knowledge lie in private experience. To undermine this picture, with all its complex ramifications, is the purpose of Wittgenstein’s private language arguments.
 There are various ways of distinguishing types of foundationalist epistemology. Plantinga (1983) has put forward an influential conception of ‘classical foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval foundationalism’, which takes foundations to comprise what is self-evident and ‘evident to the senses’, and ‘modern foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’ - in practice taken to apply to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called ‘strong’ or ‘extreme’ foundationalism and ‘moderate’ or ‘minimal’ foundationalism, with the distinction depending on whether various epistemic immunities are required of foundations. Finally, ‘simple’ and ‘iterative’ foundationalism are distinguished by whether it is required of a foundation only that it be immediately justified, or whether it is also required that the higher-level belief that the former belief is immediately justified be itself immediately justified.
 The classic opposition, however, is between foundationalism and coherentism. Coherentism denies any immediate justification. It deals with the regress argument by rejecting ‘linear’ chains of justification and, in effect, taking the total system of belief to be epistemically primary. A particular belief is justified to the extent that it is integrated into a coherent system of belief. More recently, pragmatists like the American educator, social reformer and philosopher John Dewey (1859-1952) have developed a position known as contextualism, which avoids ascribing any overall structure to knowledge. Questions concerning justification can only arise in a particular context, defined in terms of assumptions that are simply taken for granted, though they can be questioned in other contexts, where other assumptions will be privileged.
 Meanwhile, the idea that the language each of us speaks is essentially private, that learning a language is a matter of associating words with, or ostensively defining words by reference to, subjective experience (the ‘given’), and that communication is a matter of stimulating in the mind of the hearer a pattern of associations qualitatively identical with that in the mind of the speaker, is linked with multiple mutually supporting misconceptions about language, experience and its identity, the mental and its relation to behaviour, self-knowledge and knowledge of the states of mind of others.
 1. The idea that there can be such a thing as a private language is one manifestation of a commitment to what Wittgenstein called ‘Augustine’s picture of language’ - a pre-theoretical picture according to which the essential function of words is to name items in reality, the link between word and world is effected by ‘ostensive definition’, and sentences are combinations of names that describe a state of affairs. Applied to the mental, this yields the view that one knows what a psychological predicate such as ‘pain’ means if one knows - is acquainted with - what it stands for: a sensation one has. The word ‘pain’ is linked to the sensation it names by way of a private ostensive definition, which is effected by concentrating (the subjective analogue of pointing) on the sensation and undertaking to use the word of that sensation. First-person present-tense psychological utterances, such as ‘I have a pain’, are conceived to be descriptions which the speaker, as it were, reads off the facts that are privately accessible to him.
 2. Experiences are conceived to be privately owned and inalienable - no one else can have my pain; another person can at best have a pain similar to, but not numerically identical with, mine. They are also thought to be epistemically private - only I really know that what I have is a pain; others can at best only believe or surmise that I am in pain.
 3. Avowals of experience are expressions of self-knowledge. When I have an experience, e.g., a pain, I am conscious or aware that I have it by introspection (conceived as a faculty of inner sense). Consequently, I have direct or immediate knowledge of my subjective experience. Since no one else can have what I have, or peer into my mind, my access is privileged. I know, and am certain, that I have a certain experience whenever I have it, for I cannot doubt that this, which I now have, is a pain.
 4. One cannot gain introspective access to the experiences of others, so one can obtain only indirect knowledge or belief about them. They are hidden behind observable behaviour, inaccessible to direct observation, and must be inferred analogically. While the argument from analogy is intended to establish our right to believe in the existence and nature of other minds, it admits that it is possible that the beings we call persons, other than ourselves, are mindless automata, but claims that we nonetheless have sufficient reason for supposing this not to be the case: there is more evidence that they are not mindless automata than that they are.
 The real criticism of the argument is due to Wittgenstein (1953). It is that the argument assumes that we at least understand the claim that there are subjects of experience other than ourselves, who enjoy experiences which are like ours but not ours: it only asks what reason we have to suppose that claim true. But if the argument did indeed express the ground of our right to believe in the existence of others, it would be impossible to explain how we are able to achieve that understanding. So if there is a place for the argument from analogy, the problem of other minds - the real, hard problem, which is how we acquire a conception of another mind - is insoluble. The argument is either redundant or worse.
 Even so, inference to the best explanation is claimed by many to be a legitimate form of non-deductive reasoning, which provides an important alternative to both deduction and enumerative induction. Indeed, some would claim that it is only through reasoning to the best explanation that one can justify beliefs about the external world, the past, theoretical entities in science, and even the future. Consider beliefs about the external world, and assume that we know what we do about the external world through our knowledge of subjective and fleeting sensations. It seems obvious that we cannot deduce any truths about the existence of physical objects from truths describing the character of our sensations. But neither can we observe a correlation between sensations and something other than sensations, since by hypothesis all we ever have to rely on ultimately is knowledge of our sensations. Nevertheless, we may be able to posit physical objects as the best explanation for the character and order of our sensations. In the same way, various hypotheses about the past might best explain present memory; theoretical postulates in physics might best explain phenomena in the macro-world; and it is even possible that hypotheses about the future might best explain present observations. But what exactly is the form of an inference to the best explanation? If we are to distinguish between legitimate and illegitimate reasoning to the best explanation, it would seem that we need a more sophisticated model of the argument form: in reasoning to an explanation we need ‘criteria’ for choosing between alternative explanations. And if reasoning to the best explanation is to constitute a genuine alternative to inductive reasoning, it is important that these criteria not be implicit premises which will convert our argument into an inductive argument.
 However, in evaluating the claim that inference to the best explanation constitutes a legitimate and independent argument form, one must explore the question of whether it is a contingent fact that at least most phenomena have explanations, and that explanations satisfying a given criterion - simplicity, for example - are more likely to be correct. It would be a contingent fact about our universe if it were structured in such a way that simple, powerful, familiar explanations were usually the correct ones - a fact discoverable only a posteriori. If reasoning to the best explanation relies on such criteria, it seems that one cannot without circularity use reasoning to the best explanation to discover that reliance on such criteria is safe. But if one has some independent way of discovering that simple, powerful, familiar explanations are more often correct, then why should we think that reasoning to the best explanation is an independent source of information about the world? Indeed, why should we not conclude that it would be more perspicuous to represent the reasoning as simply an instance of familiar inductive reasoning?
 5. The observable behaviour from which we thus infer consists of bare bodily movements caused by inner mental events. The outer (behaviour) is not logically connected with the inner (the mental). Hence the mental is essentially private, known stricto sensu only to its owner, and the private and subjective is better known than the public.
 The resultant picture leads first to scepticism and then, ineluctably, to solipsism. Since pretence and deceit are always logically possible, one can never be sure whether another person is really having the experience he behaviourally appears to be having. Worse, if a given psychological predicate means ‘this’ (which I have), and no one else could logically have it - since experience is inalienable - then it becomes unintelligible to suppose that there are any other subjects of experience. Similar scepticism infects language: if the defining samples of the primitive terms of a language are private, then I cannot be sure that what you mean by ‘red’ or ‘pain’ is not qualitatively identical with what I mean by ‘green’ or ‘pleasure’. And nothing can stop us from concluding that all languages are private and strictly mutually unintelligible.
 Philosophers had always been aware of the problematic nature of knowledge of other minds and of the mutual intelligibility of speech on their favoured picture. It is a manifestation of Wittgenstein’s genius to have launched his attack at the point which seemed incontestable - asking not whether I can know of the experiences of others, nor whether I can understand the ‘private language’ of another in attempted communication, but whether I can understand my own allegedly private language.
 The functionalist thinks of ‘mental states’ and events as causally mediating between a subject’s sensory inputs and that subject’s ensuing behaviour: the doctrine is that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the functional relation it bears to the subject’s perceptual stimuli, behavioural responses and other mental states. Functionalism is one of the great ‘isms’ that have been offered as solutions to the mind/body problem. The cluster of questions that all of these ‘isms’ promise to answer can be expressed as: What is the ultimate nature of the mental? At the most general level, what makes a mental state mental? At the more specific level that has been the focus in recent years: What do thoughts have in common in virtue of which they are thoughts - what makes a thought a thought, and a pain a pain? Cartesian dualism said the ultimate nature of the mental was to be found in a special mental substance. Behaviourism identified mental states with behavioural dispositions; physicalism in its most influential version identifies mental states with brain states. Of course, the relevant physical states are various sorts of neural states, though our concepts of mental states such as thinking and feeling are different from our concepts of neural states.
 Disaffected with Cartesian dualism and with the ‘first-person’ perspective of introspective psychology, the behaviourists claimed that there is nothing to the mind but the subject’s behaviour and dispositions to behave. For example, for Rudolf to be in pain is for Rudolf to be either behaving in a wincing-groaning-and-favouring way or disposed to do so (when nothing is keeping him from doing so): it is nothing about Rudolf’s putative inner life or any episode taking place within him.
 Though behaviourism avoided a number of nasty objections to dualism (notably Descartes’ admitted problem of mind-body interaction), some theorists were uneasy: they felt that in its total repudiation of the inner, behaviourism was leaving out something real and important. U.T. Place spoke of an ‘intractable residue’ of conscious mental items that bear no clear relations to behaviour of any particular sort. It also seems perfectly possible for two people to differ psychologically despite total similarity of their actual and counterfactual behaviour, as in a Lockean case of ‘inverted spectrum’; for that matter, a creature might exhibit all the appropriate stimulus-response relations and lack mentation entirely.
 For such reasons, Place and the Cambridge-born Australian philosopher J.J.C. Smart proposed a middle way, the ‘identity theory’, which allowed that at least some mental states and events are genuinely inner and genuinely episodic after all: they are not to be identified with outward behaviour or even with hypothetical dispositions to behave. But, contrary to dualism, the episodic mental items are not ghostly or non-physical either; rather, they are neurophysiological. Pain, however, is an experience that seems to resist ‘reduction’ in terms of behaviour: although ‘pain’ obviously has behavioural consequences - being unpleasant, disruptive and sometimes overwhelming - there is also something more than behaviour, something ‘that it is like’ to be in pain, and there is all the difference in the world between pain behaviour accompanied by pain and the same behaviour without pain. Theories identifying pain with the neural events subserving it have accordingly been attacked, e.g., by Kripke, on the grounds that while a genuine metaphysical identity should be necessarily true, the association between pain and any such events would be contingent.
 Nonetheless, the American philosophers Hilary Putnam (1926- ) and Jerry Alan Fodor (1935- ) pointed out an awkward implication of the identity theory understood as a theory of types or kinds of mental items: that a mental type such as pain has always and everywhere the neurophysiological characterization initially assigned to it. For example, if the identity theorist identified pain itself with the firing of c-fibres, it followed that a creature of any species (earthly or science-fictional) could be in pain only if that creature had c-fibres and they were firing. However, such a constraint on the biology of any being capable of feeling pain is both gratuitous and indefensible: why should we suppose that an organism must be made of the same chemical materials as us in order to have what can accurately be recognized as pain? The identity theorists had overreacted to the behaviourists’ difficulties and focussed too narrowly on the specifics of human beings’ actual inner states; in doing so, they had fallen into species chauvinism.
 Fodor and Putnam advocated the obvious correction: what was important was not the c-fibres per se that were firing, but what their firing was doing - what it contributed to the operation of the organism as a whole. The role of the c-fibres could have been performed by any mechanically suitable component; so long as that role was performed, the psychological constitution of the organism would have been unaffected. Thus, to be in pain is not, per se, to have c-fibres that are firing, but merely to be in some state or other, of whatever biochemical description, that plays the same functional role as the firing of c-fibres plays in human beings. We may continue to maintain that pain ‘tokens’ - individual instances of pain occurring in particular subjects at particular times - are identical with particular neurophysiological states of those subjects at those times, namely whichever states happen to be playing the appropriate roles: this is the thesis of ‘token identity’ or ‘token physicalism’. But pain itself (the kind, universal or type) can be identified only with something more abstract: the causal or functional role that c-fibre firings share with their potential replacements or surrogates. Mental state-types are identified not with neurophysiological types but with more abstract functional roles, as specified by state-tokens’ relations to the organism’s inputs, outputs and other psychological states.
 Functionalism has distinct sources. Putnam and Fodor saw mental states in terms of an empirical computational theory of the mind; Smart’s ‘topic-neutral’ analyses led Armstrong and Lewis to a functional analysis of mental concepts; and Wittgenstein’s idea of meaning as use led to a version of functionalism as a theory of meaning, further developed by Wilfrid Sellars (1912-89) and later by Harman.
 One motivation behind functionalism can be appreciated by attending to artefact concepts like ‘carburettor’ and biological concepts like ‘kidney’. What it is for something to be a carburettor is for it to mix fuel and air in an internal combustion engine: ‘carburettor’ is a functional concept. In the case of ‘kidney’, too, the scientific concept is functional - defined in terms of a role in filtering the blood and maintaining certain chemical balances.
 The kind of function relevant to the mind can be introduced through the parity-detecting automaton, for the method for defining automaton states is supposed to work for mental states as well: according to functionalism, all there is to being in pain is being in a state that is caused by such things as sitting on tacks and that in turn causes one to say ‘ouch’, to wonder whether one is ill, and so forth. Mental states can thus be totally characterized in terms that involve only logico-mathematical language and terms for input signals and behavioural outputs. In this way functionalism satisfies one of the desiderata of behaviourism: it characterizes the mental in entirely non-mental language.
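The parity-detecting automaton mentioned above can be given a minimal sketch (my own illustration, not from the text): a machine that reads bits and ends in an ‘even’ or ‘odd’ state according to how many 1s it has seen. The state names and the `run` function are hypothetical; what matters is that each state is characterized entirely by its role in the transition table, not by what it is ‘made of’.

```python
# A two-state parity-detecting automaton. The states are defined purely
# by their input-output roles: how each input maps states to states.
TRANSITIONS = {
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def run(bits, state="even"):
    """Feed a sequence of bits through the automaton; return the final state."""
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state

print(run([1, 0, 1, 1]))  # three 1s seen -> odd
```

The labels ‘even’ and ‘odd’ are arbitrary: any two distinguishable tokens playing these transition roles would realize the same automaton, which is the functionalist point.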
 Suppose we have a theory of mental states that specifies all the causal relations among the states, sensory inputs and behavioural outputs. Focussing on pain as a sample mental state, it might say, among other things, that sitting on a tack causes pain and that pain causes anxiety and saying ‘ouch’. Agreeing for the sake of the example to go along with this moronic theory, functionalism would then say that we could define ‘pain’ as follows: being in pain = being in the first of two states, the first of which is caused by sitting on tacks, and which in turn causes the other state and the emitting of ‘ouch’. More symbolically:
   Being in pain = Being an x such that ∃P ∃Q [sitting on a tack causes P and P causes both Q and emitting ‘ouch’, and x is in P]
More generally, if T is a psychological theory with ‘n’ mental terms of which the seventeenth is ‘pain’, we can define ‘pain’ relative to T as follows (the ‘F1’ . . . ‘Fn’ are variables that replace the ‘n’ mental terms):
   Being in pain = Being an x such that ∃F1 . . . ∃Fn [T(F1 . . . Fn) & x is in F17]
The existentially quantified part of the right-hand side before the ‘&’ is the Ramsey sentence of the theory ‘T’. In this way, functionalism characterizes the mental in non-mental terms - in terms that involve quantification over realizations of mental states but no explicit mention of them. Thus functionalism characterizes the mental in terms of structures that are tacked down to reality only at the inputs and outputs.
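The quantification over realizations can be given a toy computational gloss (my own sketch under stated assumptions, not from the text): a system realizes the moronic pain theory just in case some assignment of its internal states to the variables P and Q satisfies the stated causal relations. The names `causes` and `realizes_pain_theory` are hypothetical; only ‘tack’ and ‘ouch’ come from the text’s example.

```python
from itertools import permutations

# Hypothetical causal map of one system: what each cause produces.
causes = {
    "tack": "s1",          # sitting on a tack causes internal state s1
    "s1": ("s2", "ouch"),  # s1 causes internal state s2 and saying 'ouch'
}

def realizes_pain_theory(causes):
    """Return the state occupying the pain role, or None if the theory is unrealized."""
    # Collect the system's internal states (everything except inputs/outputs).
    states = set()
    for cause, effect in causes.items():
        if cause != "tack":
            states.add(cause)
        for e in (effect if isinstance(effect, tuple) else (effect,)):
            if e != "ouch":
                states.add(e)
    # Try every assignment of internal states to the variables P and Q.
    for P, Q in permutations(states, 2):
        if causes.get("tack") == P and causes.get(P) == (Q, "ouch"):
            return P  # P occupies the functional role of pain
    return None

print(realizes_pain_theory(causes))  # the state playing the pain role: s1
```

Whatever biochemical description ‘s1’ turns out to have is irrelevant; any state occupying that role would count as pain on this definition.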
 The psychological theory ‘T’ just mentioned can be either an empirical psychological theory or else a common-sense ‘folk’ theory, and the resulting functionalisms are very different. In the former case, called ‘psychofunctionalism’, the functional definitions are supposed to fix the extensions of mental terms. In the latter case, conceptual functionalism, the functional definitions are aimed at capturing our ordinary mental concepts. (This distinction reveals an ambiguity in the original question of what the ultimate nature of the mental is.) The idea of psychofunctionalism is that the scientific nature of the mental consists not in anything biological, but in something ‘organizational’, analogous to computational structure. Conceptual functionalism, by contrast, can be thought of as a development of logical behaviourism. Logical behaviourists thought that pain was a disposition to pain behaviour. But as the British logician and moral philosopher Peter Thomas Geach (1916- ) and the American philosopher Roderick Milton Chisholm (1916-99) pointed out, what counts as pain behaviour depends on the agent’s beliefs and desires. Conceptual functionalism avoids this problem by defining each mental state in terms of its contribution to dispositions to behave - and to have other mental states.
 The functional characterizations given so far have assumed a psychological theory with a finite number of mental state terms. In the case of monadic states like pain, the sensation of red, and so forth, it does seem a theoretical option simply to list the states and their relations to other states, inputs and outputs. But for a number of reasons this is not a sensible theoretical option for belief-states, desire-states, and other propositional-attitude states. For one thing, the list would be too long to be represented without combinatorial methods; indeed, there is arguably no upper bound on the number of propositions any one of which could in principle be an object of thought. For another thing, there are systematic relations among beliefs: for example, the belief that ‘John loves Mary’ and the belief that ‘Mary loves John’ represent the same objects as related to each other in converse ways. A theory of the nature of beliefs can hardly just leave out such an important feature of them. We cannot treat ‘believes-that-grass-is-green’, ‘believes-that-grass-is-blue’, and so forth, as unrelated primitive predicates. So we will need a more sophisticated theory, one that involves some sort of combinatorial apparatus. The most promising candidates are those that treat belief as a relation. But a relation to what? There are two distinct issues at hand. One issue is how to formulate the functional theory so as to capture these systematic relations among attitudes.
One suggestion is in terms of a correspondence between the logical relations among sentences and the inferential relations among mental states. A second issue is what types of states could possibly realize the relational propositional-attitude states. Fodor (1987) has stressed the systematicity of the propositional attitudes, pointing out that beliefs whose contents are systematically related exhibit the following sort of empirical relation: if one is capable of believing that Mary loves John, one is also capable of believing that John loves Mary. Fodor argues that only a language of thought in the brain could explain this fact.
 Jerry Alan Fodor (1935- ) is an American philosopher of mind well known for a resolute realism about the nature of mental functioning. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structure, like formulae transformed by processes of computation - unlike ‘holists’ such as Donald Herbert Davidson (1917-2003) or ‘instrumentalists’ about mental ascription such as Daniel Clement Dennett (1942- ). In recent years he has become a vocal critic of some of the aspirations of cognitive science. His books include ‘The Language of Thought’ (1975), ‘The Modularity of Mind’ (1983), ‘Psychosemantics’ (1987), ‘The Elm and the Expert’ (1994), ‘Concepts: Where Cognitive Science Went Wrong’ (1998), and ‘Hume Variations’ (2003).
 ‘Folk psychology’ is primarily ‘intentional explanation’: the idea that people’s behaviour can be explained by reference to the contents of their beliefs and desires. Correspondingly, the methodological issue is whether intentional explanation can be made into science. Similar questions might be asked about the scientific potential of other folk-psychological concepts (consciousness, for example), but what makes intentional explanations problematic is that they presuppose that there are intentional states. What makes intentional states problematic is that they exhibit a pair of properties assembled in the concept of ‘intentionality’. In its current use, the expression ‘intentionality’ refers to that property of the mind by which it is directed at, about, or of objects and states of affairs in the world. Intentionality, so defined, includes such mental phenomena as belief, desire, intention, hope, fear, memory, hate, lust and disgust, as well as perception and intentional action. Two properties in particular remain:
 (1) Intentional states have causal powers. Thoughts (more precisely, havings of thoughts) make things happen: typically, thoughts make behaviour happen. Self-pity can make one weep, as can onions.
 (2) Intentional states are semantically evaluable. Beliefs, for example, are about how things are and are therefore true or false depending on whether things are the way that they are believed to be. Consider, by contrast, tables, chairs, onions, and the cat’s being on the mat: though they all have causal powers, they are not about anything and are therefore not evaluable as true or false.
 If there is to be an intentional science, there must be semantically evaluable things that have causal powers. Moreover, there must be laws about such things, including, in particular, laws that relate beliefs and desires to one another and to actions. If there are no intentional laws, then there is no intentional science. Perhaps scientific explanation is not always explanation by law subsumption, but surely it often is, and there is no obvious reason why an intentional science should be exceptional in this respect. Moreover, one of the best reasons for supposing that common sense is right about there being intentional states is precisely that there seem to be many reliable intentional generalizations for such states to fall under. It is plausible to assume that many of the truisms of folk psychology either articulate intentional laws or come pretty close to doing so.
 So, for example, it is a truism of folk psychology that rote repetition facilitates recall. (More generally, repetition improves performance: ‘How do you get to Carnegie Hall?’) This generalization relates the content of what you learn to the content of what you say to yourself while you are learning it: what it expresses is, prima facie, a lawful causal relation between types of intentional states. Real psychology has lots more to say on this topic, but it is much more of the same: to a first approximation, repetition does causally facilitate recall, and that it does so is lawful.
 There are, to put it mildly, many other cases of such reliable intentional causal generalizations. There are also many, many kinds of folk-psychological generalizations about ‘correlations’ among intentional states, and these too are plausible candidates for fleshing out as intentional laws: for example, that anyone who knows what 7 + 5 is is likely also to know what 7 + 6 is; that anyone who knows what ‘John loves Mary’ means knows what ‘Mary loves John’ means; and so forth.
 Philosophical opinion about folk-psychological intentional generalizations runs the gamut from ‘there are not any that are really reliable’ to ‘they are all platitudinously true, hence not empirical at all’. Suffice it to say that the necessity of ‘if 7 + 5 = 12 then 7 + 6 = 13’ is quite compatible with the contingency of ‘if someone knows that 7 + 5 = 12, then he knows that 7 + 6 = 13’. And then part of the question ‘How can there be an intentional science?’ is ‘How can there be intentional laws?’
 Let us assume, most generally, that laws support counterfactuals and are confirmed by their instances. Assume further that every law is either basic or not. Basic laws are either exceptionless or intractably statistical, and the only basic laws are the laws of basic physics.
 All non-basic laws, including the laws of all the non-basic sciences - in particular, the intentional laws of psychology - are ‘c[eteris] p[aribus]’ laws: they hold only ‘all else being equal’. There is, or anyhow there ought to be, a whole department of the philosophy of science devoted to the construal of cp laws: to making clear, for instance, how they can be explanatory, how they can support counterfactuals, how they can subsume the singular causal truths that instance them, and so forth. These issues are omitted in what follows because they do not belong to philosophical psychology as such: the laws of intentional psychology are cp laws because psychology is a special, i.e., non-basic, science, not because it is an intentional science.
 There is a further quite general property that distinguishes cp laws from basic ones: non-basic laws want mechanisms for their implementation. Suppose, for a working example, that some special science states that being ‘F’ causes xs to be ‘G’. (Being irradiated by sunlight causes plants to photosynthesize; being freely suspended near the earth’s surface causes bodies to fall with uniform acceleration; and so on.) Then it is a constraint on this generalization’s being lawful that the question ‘How does being F cause xs to be G?’ must have an answer. This is one of the ways special-science laws differ from basic laws. A basic law says that Fs cause Gs, full stop: if there were an explanation of how, or why, or by what means Fs cause Gs, the law would not have been basic but derived.
 Typically, though not invariably, the mechanism that implements a special-science law is defined over the micro-structure of the things that satisfy the law. The answer to ‘How does sunlight make plants photosynthesize?’ implicates the chemical structure of plants; the answer to ‘How does freezing make water solid?’ implicates the molecular structure of water; and so forth. In consequence, theories about how a law is implemented usually draw upon the vocabularies of two or more levels of explanation.
 If you are specially interested in the peculiarities of aggregates of matter at the Lth level (in plants, or minds, or mountains, as it might be), then you are likely to be specially interested in implementing mechanisms at the L-1th level (the ‘immediate’ mechanisms): this is because the characteristics of L-level laws can often be explained by the characteristics of their L-1th-level implementations. You can learn a lot about plants qua plants by studying their chemical composition; you learn correspondingly less by studying their subatomic constituents, though no doubt laws about plants are implemented, eventually, sub-atomically. The question thus arises of what mechanisms might immediately implement the intentional laws of psychology and thereby account for their characteristic features.
 Intentional laws subsume causal interactions among mental processes; that much is truistic. But in this context something substantive is intended, something that a theory of the implementation of intentional laws will have to account for: the causal processes that intentional states enter into have a tendency to preserve their semantic properties. For example, thinking true thoughts tends to cause one to think more thoughts that are also true. This is no small matter: the very rationality of thought depends on such facts as that the true thought that ((P → Q) and (P)) tends to cause the true thought that ‘Q’.
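The sense in which the transition from ((P → Q) and P) to Q preserves truth can be checked mechanically - a sketch of the logical point, not of anything in the text; the function names are my own. Modus ponens never takes true premises to a false conclusion under any valuation:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

def modus_ponens_is_truth_preserving():
    # Exhaustively check all four valuations of P and Q.
    for P, Q in product([True, False], repeat=2):
        premises_true = implies(P, Q) and P
        if premises_true and not Q:
            return False  # a counterexample valuation would land here
    return True

print(modus_ponens_is_truth_preserving())  # -> True
```

A mechanism implementing intentional laws would have to explain why causal transitions among thoughts tend to mirror truth-preserving patterns like this one.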
 A good deal of what has happened in psychology since the Viennese founder of psychoanalysis, Sigmund Freud (1856-1939), has consisted of finding new and surprising cases in which mental processes are semantically coherent under intentional characterization. Freud made his reputation by showing that this was true even of much of the detritus of behaviour - dreams, verbal slips and the like - and even of free word association and responses to ink-blot cards (the Rorschach test). It turns out, moreover, that the psychology of normal mental processes is largely grist for the same mill. For example, it proves theoretically revealing to construe perceptual processes as inferences that take specifications of proximal stimulations as premises and yield specifications of distal objects as conclusions - inferences that are reliably truth-preserving in ecologically normal circumstances. The psychology of learning cries out for analogous treatment, e.g., for treatment as a process of hypothesis formation and confirmation.
 Intentional states, as common sense understands them, have both causal and semantic properties, and the combination appears to be unprecedented. Propositions are semantically evaluable, but they are abstract objects and have no causal powers. Onions are concrete particulars and have causal powers, but they are not semantically evaluable. Intentional states seem to be unique in combining the two, and that is what so many philosophers have against them.
 Suppose, once again, the inscription 'the cat is on the mat'. On the one hand, the inscription is a concrete particular in good standing, and it has, qua material object, an open-ended galaxy of causal powers. (It reflects light in ways that are essential to its legibility; it exerts a small but in principle detectable gravitational effect upon the moon; and so on.) On the other hand, the inscription is about something and is therefore semantically evaluable: it is true if and only if there is a cat where it says that there is. So the inscription 'the cat is on the mat' has both content and causal powers, and so does my thought that the cat is on the mat.
 At this point we might ask how many words there are in the sentence 'The cat is on the mat'. There are, of course, at least two answers to this question, because one can count either word types, of which there are five, or individual occurrences, known as tokens, of which there are six. Moreover, depending on how one chooses to think of word types, another answer is possible: since the sentence contains a definite article, nouns, a preposition and a verb, there are four grammatically different types of word in it.
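The two ways of counting can be made vivid in a few lines of code; this is merely an illustrative sketch, not anything the text itself supplies:

```python
sentence = "the cat is on the mat"

tokens = sentence.split()   # individual occurrences of words
types = set(tokens)         # distinct word types: 'the' is counted only once

print(len(tokens))  # prints 6: six word tokens
print(len(types))   # prints 5: five word types, since 'the' occurs twice
```

The grammatical count of four (article, noun, preposition, verb) would require a further classification of the five types, which is exactly the text's point that the answer depends on how one chooses to individuate types.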
 The type/token distinction, understood as a distinction between sorts of thing and their instances, is commonly applied to mental phenomena. For example, one can think of pain in the type way, as when we say that we have experienced a burning pain many times, or in the token way, as when we speak of the burning pain currently being suffered. The type/token distinction for mental states and events becomes important in the context of attempts to describe the relationship between mental and physical phenomena. In particular, the identity theory asserts that mental states are physical states, and this raises the question whether the identity in question is of types or of tokens.
 If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural state. Our concepts of mental states such as thinking, sensing and feeling are, of course, different from our concepts of neural states, of whatever sort. Still, that is no problem for the identity theory. As J. J. C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identity does not depend on our concepts of mental states or the meanings of mental terminology. For a to be identical with b, a and b must have exactly the same properties, but the terms 'a' and 'b' need not mean the same. The principle of the indiscernibility of identicals states that if a is identical with b, then every property that a has b has, and vice versa. This is sometimes known as Leibniz's law.
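The principle can be stated compactly in second-order notation; the symbolization below is a standard rendering of Leibniz's law, not the text's own formula:

```latex
% The indiscernibility of identicals (Leibniz's law):
% if a is identical with b, then a and b share every property F.
\forall a\, \forall b\, \bigl( a = b \;\rightarrow\; \forall F\, ( F(a) \leftrightarrow F(b) ) \bigr)
```

The converse principle, the identity of indiscernibles, is a distinct and more controversial claim, and is not what the identity theory relies on here.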
 However, a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states: even if we reject a dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical properties.
 The problem just sketched about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. So even if mental states are all identical with physical states, those states appear to have properties that are not physical. And if mental states do actually have non-physical properties, the identity of mental with physical states would not sustain a thoroughgoing mind-body materialism.
 A more sophisticated reply to the difficulty about mental properties is due independently to D. M. Armstrong (1926- ), the forthright Australian materialist who was, together with J. J. C. Smart, among the leading Australian philosophers of the second half of the twentieth century, and to the American philosopher David Lewis (1941-2002). They argue that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will then be neutral as between being mental or physical, since anything can bear causal relations to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.
 Early identity theorists insisted that the identity between mental and bodily events was contingent, meaning simply that the relevant identity statements were not conceptual truths. That leaves open the question of whether such identities would be necessarily true on other construals of necessity.
 The American logician and philosopher Saul Aaron Kripke (1940- ) made his early reputation as a logical prodigy, especially through work on the completeness of systems of modal logic. The three classic papers are 'A Completeness Theorem in Modal Logic' (1959, Journal of Symbolic Logic), 'Semantical Analysis of Modal Logic' (1963, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik) and 'Semantical Considerations on Modal Logic' (1963, Acta Philosophica Fennica). In Naming and Necessity (1980), Kripke gave the classic modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its bearer. His Wittgenstein on Rules and Private Language (1982) also proved seminal, putting the rule-following considerations at the centre of Wittgenstein studies, and arguing that the private language argument is an application of them. Kripke has also written influential work on the theory of truth and the solution of the semantic paradoxes.
 Nonetheless, Kripke (1980) has argued that such identities would have to be necessarily true if they were true at all. Some terms refer to things contingently, in that those terms would have referred to different things had circumstances been relevantly different. Kripke's example is 'the first Postmaster General of the United States', which, in a different situation, would have referred to somebody other than Benjamin Franklin. Kripke calls such terms non-rigid designators. Other terms refer to their objects necessarily: since no circumstances are possible in which they would refer to anything else, these terms are rigid designators.
 If the terms 'a' and 'b' refer to the same thing and both designate that thing rigidly, the identity statement 'a = b' is necessarily true. Kripke maintains that the term 'pain' and the terms for the various brain states all designate the states they refer to rigidly: no circumstances are possible in which these terms would refer to different things. So, if pain were identical with some particular brain state, it would be necessarily identical with that state. Yet Kripke argues that pain cannot be necessarily identical with any brain state, since the tie between pains and brain states plainly seems contingent. He concludes that they cannot be identical at all.
 Kripke notes that our intuitions about whether an identity is contingent can mislead us. Heat is necessarily identical with mean molecular kinetic energy: no circumstances are possible in which they are not identical. Still, it may at first sight appear that heat could have been identical with some other phenomenon. It appears this way, Kripke argues, only because we pick out heat by our sensation of heat, which bears only a contingent connection to mean molecular kinetic energy. It is the sensation of heat that is connected contingently with mean molecular kinetic energy, not the physical heat itself.
 Kripke insists, however, that such reasoning cannot disarm our intuitive sense that pain is connected only contingently with brain states. This is because for a state to be pain is necessarily for it to be felt as pain. Unlike heat, in the case of pain there is no difference between the state itself and how that state is felt, and intuitions about the one are perforce intuitions about the other.
 Kripke's assumption about the term 'pain' is open to question. As Lewis notes, one need not hold that 'pain' designates the same state in all possible situations; indeed, the causal theory explicitly allows that it may not. And if it does not, it may be that pains and brain states are contingently identical. But there is also a problem about a substantive assumption Kripke makes about the nature of pains, namely, that pains are necessarily felt as pains. First impressions notwithstanding, there is reason to think not. There are times when we are not aware of our pains, for example, when we are suitably distracted. So the relationship between pains and our being aware of them may be contingent after all, just as the relationship between physical heat and our sensation of heat is. And that would disarm the intuition that pain is connected only contingently with brain states.
 Kripke's argument focuses on pains and other sensations, which, because they have qualitative properties, are frequently held to cause the greatest problems for the identity theory. The American philosopher Thomas Nagel (1937- ) traces a general difficulty for the identity theory to the consciousness of mental states. A mental state's being conscious, he urges, means that there is something it is like to be in that state. And to understand that, we must adopt the point of view of the kind of creature that is in the state. But an account of something is objective, he insists, only insofar as it is independent of any particular type of point of view. Since consciousness is inextricably tied to points of view, no objective account of it is possible. And that means conscious states cannot be identical with bodily states.
 The viewpoint of a creature is central to what that creature's conscious states are like, because different kinds of creatures have conscious states with different kinds of qualitative property. However, the qualitative properties of a creature's conscious states depend, in an objective way, on that creature's perceptual apparatus. We cannot always predict what another creature's conscious states are like, just as we cannot always extrapolate from microscopic to macroscopic properties, at least without a suitable theory that covers those properties. But what a creature's conscious states are like depends in an objective way on its bodily endowment, which is itself objective. So these considerations give us no reason to think that what those conscious states are like is not also an objective matter.
 If a sensation is not conscious, there is nothing it is like to have it. So Nagel's idea that what it is like to have sensations is central to their nature suggests that sensations cannot occur without being conscious. And that, in turn, seems to threaten their objectivity. If sensations must be conscious, perhaps they have no nature independently of how we are aware of them, and thus no objective nature. Nonetheless, only conscious sensations seem to cause problems for the identity theory.
 The notion of subjectivity, as Nagel again sees it, is the notion of a point of view, something akin to what psychologists study under the heading of a 'theory of mind'. This notion is clearly tied to the notion of essential subjectivity. This kind of subjectivity is constituted by an awareness of the world's being experienced differently by different subjects of experience. (It is thus possible to see how the privacy of phenomenal experience might easily be confused with the kind of privacy inherent in a point of view.)
 Point-of-view subjectivity seems to take time to develop. The developmental evidence suggests that even toddlers are able to understand others as being subjects of experience. For instance, at a very early age we begin ascribing mental states to other things, generally to those same things to which we ascribe 'eating'. And at quite an early age we can say what others would see from where they are standing. We demonstrate early on an understanding that the information available is different for different perceivers. It is in these perceptual senses that we first ascribe point-of-view subjectivity.
 Nonetheless, some experiments seem to show that the point-of-view subjectivity we first ascribe to others is limited. A popular and influential series of experiments by Wimmer and Perner (1983) is usually taken to illustrate these limitations (though there are disagreements about the interpretation). Two children, Dick and Jane, watch as an experimenter puts a box of candy somewhere opaque, such as a cookie jar. Jane leaves the room. Dick is asked where Jane will look for the candy, and he correctly answers, 'In the cookie jar'. The experimenter, in Dick's view, then takes the candy out of the cookie jar and puts it in another opaque place, a drawer, say. When Dick is asked where to look for the candy, he says, quite correctly, 'In the drawer'. But when asked where Jane will look for the candy when she returns, Dick also answers, 'In the drawer'. Dick ascribes to Jane not the point-of-view subjectivity she is likely to have, but the one that fits the facts. Dick is unable to ascribe a false belief to Jane, his ascription being 'reality-driven', and his inability demonstrates that Dick does not as yet have a fully developed point-of-view subjectivity.
 At around the age of four, children in Dick's position do ascribe the right point-of-view subjectivity to children in Jane's position ('Jane will look in the cookie jar'). But even so, a fully developed notion of point-of-view subjectivity is not yet attained. Suppose that Dick and Jane are shown a dog under a tree, but only Dick is shown the dog arriving there by chasing a boy up the tree. If Dick is asked to describe the scene as Jane, who he knows not to have seen the dog arrive, would see it, Dick will display a more fully developed point-of-view subjectivity only if his description does not include the preliminaries that he alone witnessed. It turns out that four-year-olds are still restricted by this limitation; only when children are six to seven do they succeed.
 Yet even when successful in these cases, children's point-of-view subjectivity is reality-driven: ascribing point-of-view subjectivity to others is still relative to the information available. Only in our teens do we seem capable of understanding that others can view the world differently from ourselves even when given access to the same information. Only then do we seem to become aware of the subjectivity of the knowing procedure itself: interpreting the 'facts' can be coloured by one's knowing procedure and history; there are no 'merely' objective facts.
 Thus, there is evidence that we ascribe a more and more subjective point of view to others: from the point-of-view subjectivity we ascribe being completely reality-driven, to the possibility that others have insufficient information, to their having merely different information, and finally to their understanding the same information differently. This developmental picture seems insufficiently familiar to philosophers, and yet well worth our thinking about and critically evaluating.
 The following questions all need answering. Does the apparent fact that the point-of-view subjectivity we ascribe to others develops over time, becoming more and more a 'private' notion, shed any light on the sort of subjectivity we ascribe to ourselves? Do our self-ascriptions of subjectivity themselves become more and more 'private', more and more removed both from the subjectivity of others and from the objective world? If so, what is the philosophical importance of these facts? At the least, this developmental history shows that disentangling ourselves from the world we live in is a complicated matter.
 The last two decades have been a period of extraordinary change, especially in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become perhaps the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically.
 Nevertheless, developmental psychology was for a time dominated by the ideas of the Swiss psychologist and pioneer of learning theory Jean Piaget (1896-1980), whose primary concern was a theory of cognitive development (his own term was 'genetic epistemology'). What is more, like modern-day cognitive psychologists, Piaget was interested in the mental representations and processes that underlie cognitive skills. However, Piaget's genetic epistemology never co-existed happily with cognitive psychology, though Piaget's idea that reasoning is based on an internalized version of the predicate calculus has influenced research into adult thinking and reasoning. One reason for the lack of side-by-side interaction between genetic epistemology and cognitive psychology was that, as cognitive psychology began to attain prominence, developmental psychologists were starting to question Piaget's ideas. Many of his empirical claims about the abilities, or more accurately the inabilities, of children of various ages were discovered to be contaminated by his unorthodox, and in retrospect unsatisfactory, empirical methods. And many of his theoretical ideas were seen to be vague, uninterpretable, or inconsistent.
 One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanation, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using, accounts that are more explicit, systematic and philosophically sophisticated than the rather rough-and-ready accounts offered by scientists themselves.
 Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of 'intentional' concepts, like believing that p, desiring that q, and representing r, which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.
 If a person X thinks that p, desires that p, believes that p, is angry at p, and so forth, then he or she is described as having a propositional attitude to p. The term suggests that these aspects of mental life are well thought of in terms of a relation to a 'proposition', and this is not universally agreed. It suggests that knowing what someone believes, and so on, is a matter of identifying an abstract object of their thought, rather than of understanding his or her orientation towards more worldly objects.
 Once again, the directedness or 'aboutness' of many, if not all, conscious states is their 'intentionality'. The term was used by the scholastics. Beliefs, thoughts, wishes, dreams and desires are about things; equally, the sentences we use to express these beliefs and other mental states are about things. The problem of intentionality is that of understanding the relation obtaining between a mental state, or its expression, and the things it is about. A number of peculiarities attend this relation. First, if I am in some relation to a chair, for instance by sitting on it, then both it and I must exist. But while mostly one thinks about things that exist, sometimes (although this way of putting it has its problems) one has beliefs, hopes and fears about things that do not, as when the child expects Santa Claus or the adult fears snakes. Secondly, if I sit on the chair, and the chair is the oldest antique chair in all of Toronto, then I sit on the oldest antique chair in Toronto. But if I plan to avoid the mad axeman, and the mad axeman is in fact my friendly postal carrier, I do not thereby plan to avoid my friendly postal carrier. The extension of a predicate is the class of objects it describes: the extension of 'red' is the class of red things. The intension is the principle under which the predicate picks them out, or in other words the condition a thing must satisfy to be truly described by it. Two predicates, '. . . is a rational animal' and '. . . is a naturally featherless biped', might pick out the same class, but they do so by different conditions.
If the notions are extended to other items, then the extension of a sentence is its truth-value, and its intension a thought or proposition; and the extension of a singular term is the object referred to by it, if it so refers, and its intension is the concept by means of which the object is picked out. A context is extensional if any other predicate or term with the same extension can be substituted into it without any possibility of the truth-value changing: if John is a rational animal and we substitute the coextensive 'is a naturally featherless biped', then 'John is a naturally featherless biped' is also true. Other contexts, such as 'Mary believes that John is a rational animal', may not allow the substitution, and are called 'intensional contexts'.
 What remains is a distinction between the contexts into which referring expressions can be put. A context is referentially transparent if any two terms referring to the same thing can be substituted in it salva veritate, i.e., without altering the truth or falsity of what is said. A context is referentially opaque when this is not so. Thus, if the number of the planets is nine, then 'the number of the planets is odd' has the same truth-value as 'nine is odd'; whereas 'necessarily the number of the planets is odd' or 'x knows that the number of the planets is odd' need not have the same truth-value as 'necessarily nine is odd' or 'x knows that nine is odd'. So while '. . . is odd' provides a transparent context, 'necessarily . . . is odd' and 'x knows that . . . is odd' do not.
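The contrast can be loosely modelled in code. In this hypothetical sketch (the names and the toy 'knowledge' store are ours, not the text's), a transparent context behaves like a function applied to the referent itself, so co-referring terms are interchangeable, while an opaque context behaves more like a lookup keyed by the very words used:

```python
number_of_the_planets = 9  # suppose, with the text, that the number of the planets is nine

def is_odd(n):
    # '... is odd' is a transparent context: it sees only the referent.
    return n % 2 == 1

# Substituting co-referring terms preserves truth-value in a transparent context:
print(is_odd(number_of_the_planets) == is_odd(9))  # prints True

# An opaque context, modelled crudely as knowledge keyed by the sentence used:
x_knows = {"nine is odd": True}

def knows(sentence):
    return x_knows.get(sentence, False)

# Here substitution of co-referring terms does NOT preserve the result:
print(knows("nine is odd"))                        # prints True
print(knows("the number of the planets is odd"))   # prints False
```

The lookup-by-sentence model is only a caricature of opacity, but it captures the logical point: in an opaque context, what matters is not just which object is referred to but how it is referred to.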
 Eliminativism, to come to the point, is the view that the terms in which we think of some area are sufficiently infected with error that it is better to abandon them than to continue to try to give coherent theories of their use. Eliminativism should be distinguished from scepticism, which claims that we cannot know the truth about some area; eliminativism claims that there is no truth there to be known, in the terms with which we currently think. An eliminativist about theology simply counsels abandoning the terms or discourse of theology, and that will include abandoning worries about the extent of theological knowledge. Eliminativists in the philosophy of mind counsel abandoning the whole network of terms, mind, consciousness, self, qualia, that usher in the problems of mind and body. Sometimes the argument for doing this is that we should wait for a supposed future understanding of ourselves, based on cognitive science and better than our current mental descriptions provide; sometimes it is supposed that physicalism shows that no mental description could possibly be true.
 It seems, nonetheless, a widespread view that the concept of intentionality is indispensable: we must either declare that serious science cannot deal with this central feature of the mind, or explain how serious science may include intentionality. On one approach, the words with which we communicate fears and beliefs have a two-faced aspect, involving both the objects referred to and the mode of presentation under which they are thought of. We can then see the mind as essentially directed onto existent things and extensionally related to them; intentionality becomes a feature of language, rather than a metaphysical or ontological peculiarity of the mental world.
 While cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus, it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. Jerry Fodor's 'The Language of Thought' (1975) was a pioneering study in this genre, one that continues to have a major impact on the field.
 The relation between language and thought is philosophy's chicken-or-egg problem. Language and thought are evidently importantly related, but how exactly are they related? Does language come first and make thought possible, or vice versa? Or are they on a par, each making the other possible?
 When the question is stated at this level of generality, however, no unqualified answer is possible. In some respects language is prior; in other respects thought is prior. For example, it is arguable that a language is an abstract pairing of expressions and meanings, a function, in the set-theoretic sense, from expressions onto meanings. This makes sense of the fact that Esperanto is a language no one speaks, and it explains why, while it is a contingent fact that 'La neige est blanche' means that snow is white among French speakers, it is a necessary truth that it means that in French. If languages are abstract objects in this sense, then they exist whether or not anyone speaks them; they even exist in possible worlds in which there are no thinkers. In this respect, then, language, as well as such notions as meaning and truth in a language, is prior to thought.
 But even if languages are construed as abstract expression-meaning pairings, they are construed as abstractions from actual linguistic practice, from the use of language in communicative behaviour, and there remains a clear sense in which language is dependent on thought. The sequence of marks 'Point Pelee is the most southern point of Canada' means among us that Point Pelee is the most southern point of Canada. Had our linguistic practice been different, that sequence of marks might have meant something else, or nothing at all, among us. That it means what it does has something to do with the beliefs and intentions underlying our use of the words and structures that compose the sentence. More generally, it is a platitude that the semantic features that marks and sounds have in a population are at least partly determined by the propositional attitudes of its members; this is the platitude, of course, which says that meaning depends, in part, on use in communicative behaviour. So here is one clear sense in which language is dependent on thought: thought is required to imbue marks and sounds with the semantic features they have in a population.
 The sense in which language does depend on thought can be wedded to the sense in which it does not in the following way. We can say that a sequence of marks or sounds (or whatever) ς means q in a language L, construed as a function from expressions onto meanings, iff L(ς) = q. This notion of meaning-in-a-language, like the notion of a language, is a mere set-theoretic notion that is independent of thought, in that it presupposes nothing about the propositional attitudes of language users: ς can mean q in L even if L has never been used. But then we can also say that ς means q in a population P. The question of moment then becomes: what relation must a population P bear to a language L in order for it to be the case that L is a language of P, a language members of P actually speak? Whatever the answer to this question is, this much seems right: in order for a language to be a language of a population of speakers, those speakers must produce sentences of the language in their communicative behaviour. Since such behaviour is intentional, the notion of a language's being the language of a population of speakers presupposes the notion of thought. And since that notion presupposes the notion of thought, the same is true of the correct account of the semantic features expressions have in populations of speakers.
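The set-theoretic picture in this paragraph can be sketched directly: a language L is just a mapping from expressions to meanings, and 'ς means q in L' is simply L(ς) = q. The toy mapping below is our own illustration, not an analysis the text provides:

```python
# A 'language' in the abstract, set-theoretic sense: a bare pairing of
# expressions with meanings, presupposing nothing about any speakers.
L = {
    "La neige est blanche": "snow is white",
    "Il pleut": "it is raining",
}

def means_in(sigma, q, language):
    # sigma means q in the language iff the language maps sigma to q.
    return language.get(sigma) == q

print(means_in("La neige est blanche", "snow is white", L))  # prints True
# The mapping holds whether or not anyone ever uses L; which population,
# if any, actually *speaks* L is the further, thought-involving question.
```

The dictionary exists as an object independently of any use, which is the sense in which meaning-in-a-language is prior to thought; the actual-language relation discussed next is precisely what the bare mapping leaves out.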
 This is a pretty thin result, not likely to be disputed, and the difficult questions remain. We know that there is some relation R such that a language L is used by a population P iff L bears R to P. Let us call this relation, whatever it turns out to be, the 'actual-language relation'. We know that it is needed to explain the semantic features expressions have among those who are apt to produce them, and we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual-language relation to be explained in terms of the propositional attitudes of language users? And what sort of dependence might those propositional attitudes in turn have on language, or on the semantic features that are fixed by the actual-language relation? Consider first the relation of language to thought, before turning to the relation of thought to language.
 All must agree that the actual-language relation, and with it the semantic features linguistic items have among speakers, is at least partly determined by the propositional attitudes of language users. This, however, leaves plenty of room for philosophers to disagree both about the extent of the determination and about the nature of the determining propositional attitudes. At one end of the spectrum, we have those who hold that the actual-language relation is wholly definable in terms of non-semantic propositional attitudes. This position in logical space is usually taken to be occupied by the programme, sometimes called intention-based semantics, of the English philosopher of language Herbert Paul Grice (1913-1988). Grice introduced the important concept of an ‘implicature’ into the philosophy of language, arguing that not everything that is said is direct evidence for the meaning of some term, since many factors may determine the appropriateness of remarks independently of whether they are actually true. The point undermines excessive attention to the niceties of conversation as reliable indicators of meaning, a methodology characteristic of ‘linguistic philosophy’. In a number of elegant papers, Grice identified what a sentence means with a complex of the intentions with which it is uttered. The psychological is thus used to explain the semantic, and the question of whether this is the correct priority has prompted considerable subsequent discussion.
 The foundational notion in this enterprise is a certain notion of speaker meaning. It is the species of communicative behaviour reported when we say, for example, that in uttering ‘Il pleut’ Pierre meant that it was raining, or that in waving her hand the Queen meant that you were to leave the room. Intention-based semantics seeks to define this notion of speaker meaning wholly in terms of communicators’ audience-directed intentions, and without recourse to any semantic notions. It then seeks to define the actual-language relation in terms of the now-defined notion of speaker meaning, together with certain ancillary notions such as that of a conventional regularity or practice, themselves defined wholly in terms of non-semantic propositional attitudes. The definition in terms of speaker meaning of other agent-semantic notions, such as that of an illocutionary act, is also part of the intention-based semantics programme.
 Some philosophers object to intention-based semantics because they think it precludes a dependence of thought on the communicative use of language. This is a mistake: even if the intention-based semantic definitions are given a strong reductionist reading, as saying that public-language semantic properties (i.e., those semantic properties that supervene on use in communicative behaviour) just are psychological properties, it might still be that one could not have propositional attitudes unless one had mastery of a public language. Here the concept of supervenience, which has seen increasing service in the philosophy of mind, becomes relevant. The thesis that the mental is supervenient on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the mind-body problem, in particular versions of non-reductive physicalism. Mind-body supervenience has also been invoked in arguments for or against certain specific claims about the mental, and has been used to devise solutions to some central problems about the mind - for example, the problem of mental causation - on the view that the psychological level of description carries with it a mode of explanation which ‘has no echo in physical theory’.
 The notion of ‘content’ applies to mental events, states, or processes. Examples of states with content include seeing that the door is shut, believing you are being followed, and calculating the square root of 2. What centrally distinguishes mental states, events, or processes with content is that they involve reference to objects, properties, or relations. A mental state with content can fail to refer, but there is always a specific condition under which a state with content would refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them. This general notion leaves open the possibility that unconscious states, as well as conscious states, have content, and it equally allows the states identified by an empirical, computational psychology to have content. A correct philosophical understanding of this general notion of content is fundamental not only to the philosophy of mind and psychology, but also to the theory of knowledge and to metaphysics.
 There is a long-standing tradition that emphasizes that the reason-giving relation is a logical or conceptual one. One way of bringing out the nature of this conceptual link is by reconstructing the reasoning that links the agent’s reason-providing states with the states for which they provide reasons. This reasoning is easiest to reconstruct in the case of reasons for belief, where the contents of the reason-providing beliefs inductively or deductively support the content of the rationalized belief. For example, I believe my colleague is in her room now, and my reasons are (1) she usually has a meeting in her room at 9:30 on Mondays and (2) it is now 9:30 on a Monday. To believe a content is to accept it as true, and it is relative to the objective of reaching truth that the rationalizing relations between contents are set for belief: they must be such that the truth of the premises makes likely the truth of the conclusion.
 The causal-explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to do this work. It also provides a motivation for the reduction of intentional characterizations to extensional ones, in an attempt to fit intentional causality into a fundamentally materialist world picture. The very nature of the reason-giving relation, however, can be seen to render such reductive projects unrealizable. This leaves causal theorists with the task of linking intentional and non-intentional levels of description in such a way as to accommodate intentional causality, without either overdetermination or a miraculous coincidence of prediction from within distinct causal-explanatory frameworks.
 The idea that mentality is physically realized is integral to the ‘functionalist’ conception of mentality, and this commits most functionalists to mind - body supervenience in one form or another. As a theory of mind, supervenience of the mental - in the form of strong supervenience, or at least global supervenience - is arguably a minimum commitment of physicalism. But can we think of the thesis of mind - body supervenience itself as a theory of the mind - body relation - that is, as a solution to the mind - body problem?
 A supervenience claim consists of a covariance claim and a dependence claim (leaving aside the controversial claim of non-reducibility). The thesis that the mental supervenes on the physical thus amounts to the conjunction of two claims: (1) strong or global supervenience, and (2) the mental depends on the physical. However, the thesis says nothing about just what kind of dependence is involved in mind-body supervenience. When you compare the supervenience thesis with the standard positions on the mind-body problem, you are struck by what the supervenience thesis does not say. For each of the classic mind-body theories has something to say, not necessarily anything very plausible, about the kind of dependence that characterizes the mind-body relationship. According to epiphenomenalism, for example, the dependence is one of causal dependence; on logical behaviourism, the dependence is rooted in meaning dependence, or definability; on standard type physicalism, the dependence is the kind involved in the dependence of macro-properties on micro-properties; and so forth. Even Gottfried Wilhelm Leibniz (1646-1716) and Nicolas Malebranche (1638-1715) had something to say about this: the observed property covariation is due not to a direct dependency relation between mind and body, but rather to divine plans and interventions. That is, mind-body covariation was explained in terms of their dependence on a third factor - a sort of ‘common cause’ explanation.
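The ‘strong supervenience’ mentioned in claim (1) has a standard schematic formulation in the literature (due to Jaegwon Kim); stated for a family A of mental properties and a family B of physical properties, it runs roughly as follows:

```latex
% A strongly supervenes on B iff, necessarily, whatever has a property
% F in A has some property G in B such that, necessarily, whatever has
% G has F.
A \text{ strongly supervenes on } B \iff
  \Box\,\forall x\,\forall F{\in}A\,\bigl[\,Fx \rightarrow
    \exists G{\in}B\,\bigl(Gx \wedge \Box\,\forall y\,(Gy \rightarrow Fy)\bigr)\bigr]
```

Note that this is purely a covariance schema: as the text goes on to argue, it says nothing about what kind of dependence, if any, underwrites the covariation.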
 It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence - or about why, contrary to common belief, there is no dependence. However, there is reason to think that ‘supervenient dependence’ does not signify a special type of dependence relation. This is evident when we reflect on the variety of ways in which a supervenience relation can be explained in a given case. Consider, for example, the supervenience of the moral on the descriptive: the ethical naturalist will explain this on the basis of definability; the ethical intuitionist will say that the supervenience, and also the dependence, is a brute fact discerned through moral intuition; and the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. Distinct from all of these is mereological supervenience, namely the supervenience of properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence, and so forth.
 If this is right, the supervenience thesis concerning the mental does not constitute an explanatory account of the mind-body relation on a par with the classic alternatives on the mind-body problem. It is merely the claim that the mental covaries in a systematic way with the physical, and that this is due to a certain dependence relation yet to be specified and explained. In this sense, the supervenience thesis states the mind-body problem rather than offering a solution to it.
 There is, however, a promising strategy for turning the supervenience thesis into a more substantive theory of mind: to explicate mind-body supervenience as a special case of mereological supervenience - that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence, one that is metaphysically distinctive and highly important. On this approach, one would explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties, i.e., the way its constituents, tissues, and so on are organized and function. This more specific supervenience thesis may well be a serious theory of the mind-body relation that can compete with the classic options in the field.
 Our considerations so far have fixed only the broad alternatives; the difficult question is which of them to accept. To return to the earlier point: whether or not it is plausible that one could not have propositional attitudes without mastery of a public language (that is a separate question), the idea would be no more logically puzzling than the idea that one could not have any propositional attitudes unless one had ones with certain sorts of contents. Tyler Burge’s insight that thought content is partly determined by the meanings of one’s words in one’s linguistic community (Burge 1979) is likewise perfectly consistent with an intention-based semantic reduction of the semantic to the psychological. Nevertheless, there is reason to be sceptical of the intention-based semantic programme. First, no intention-based semantic theorist has succeeded in stating a sufficient condition for speaker meaning, let alone the more difficult necessary-and-sufficient condition; and a plausible explanation of this failure is that what typically makes an utterance an act of speaker meaning is the speaker’s intention to be meaning or saying something, where the concept of meaning or saying used in the content of the intention is irreducibly semantic. Second, whether or not speaker meaning can be defined non-semantically, there is no plausible intention-based semantic way of accounting for the actual-language relation in terms of speaker meaning. The essence of the intention-based semantic approach is that sentences are used as conventional devices for making known a speaker’s communicative intentions: understanding is an inferential process wherein a hearer perceives an utterance and, thanks to being party to relevant conventions or practices, infers the speaker’s communicative intentions. Yet it appears that this inferential model is subject to insuperable epistemological difficulties.
 Third, there is no pressing reason to think that the semantic needs to be definable in terms of the psychological. Many intention-based semantic theorists have been motivated by a strong version of physicalism which requires the reduction of all intentional properties (i.e., all semantic and propositional-attitude properties) to physical, or at least topic-neutral or functional, properties; and it is plausible that there could be no reduction of the semantic and the psychological to the physical without a prior reduction of the semantic to the psychological. But it is arguable that so strong a version of physicalism is not what is required in order to fit the intentional into the natural order.
 What is more, there is a claimed dependence of thought on language according to which propositional attitudes are relations to linguistic items, relations that obtain at least partly by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. The position is motivated by two considerations: (a) the supposition that believing is a relation to things believed, which have truth values and stand in logical relations to one another, and (b) the desire not to take things believed to be propositions - abstract, mind- and language-independent things that have essentially the truth conditions they have. Now tenet (a) is well motivated: the relational construal of propositional attitudes is probably the best way to account for the quantification in ‘Harvey believes something nasty about you’. But there are problems with taking linguistic items, rather than propositions, as the objects of belief. In the first place, if ‘Harvey believes that flounders snore’ is represented along the lines of ‘Believes (Harvey, “flounders snore”)’, then one could know the truth expressed by the sentence about Harvey without knowing the content of his belief: for one could know that he stands in the belief relation to ‘flounders snore’ without knowing its content. This is unacceptable. In the second place, if Harvey believes that flounders snore, then what he believes - the reference of ‘that flounders snore’ - is that flounders snore. But what is this thing, that flounders snore? Well, it is abstract, in that it has no spatial location; it is mind- and language-independent, in that it exists in possible worlds in which there are neither thinkers nor speakers; and, necessarily, it is true iff flounders snore.
 In short, it is a proposition - an abstract, mind- and language-independent thing that has a truth condition and has essentially the truth condition it has.
 A more plausible way in which thought depends on language is suggested by the popular thesis that we think in a ‘language of thought’. On one reading, this is nothing more than the vague idea that the neural states that realize our thoughts ‘have elements and structure in a way that is analogous to the way in which sentences have elements and structure’. But we can get a more literal rendering by relating it to the abstract conception of languages already introduced. On this conception, a language is a function from ‘expressions’ - sequences of marks or sounds or neural states or whatever - onto meanings, where meanings will include the propositions our propositional-attitude relations relate us to. We could then read the language-of-thought hypothesis as the claim that having propositional attitudes requires standing in a certain relation to a language whose expressions are neural states. There would now be more than one ‘actual-language relation’; the one discussed earlier might be better called the ‘public-language relation’. Since the abstract notion of a language has been so weakly construed, it is hard to see how the minimal language-of-thought proposal just sketched could fail to be true; at the same time, it has been given no interesting work to do. In trying to give it more interesting work, further dependencies of thought on language might come into play. For example, it has been claimed that the language of thought of a public-language user just is the public language she uses: her neural sentences are related to her spoken and written sentences in something like the way her written sentences are related to her spoken sentences. For another example, it might be claimed that even if one’s language of thought is distinct from one’s public language, the language-of-thought relation presupposes the public-language relation in ways that make the content of one’s neural sentences dependent on the meanings of one’s words in one’s public-language community.
 Tyler Burge has in fact shown that there is a sense in which thought content is dependent on the meanings of words in one’s linguistic community. Alfred’s use of ‘arthritis’ is fairly standard, except that he is under the misconception that arthritis is not confined to the joints: he also applies the word to rheumatoid ailments not in the joints. Noticing an ailment in his thigh that is symptomatically like the disease in his hands and ankles, he says to his doctor, ‘I have arthritis in the thigh’; here Alfred is expressing his false belief that he has arthritis in the thigh. But now consider a counterfactual situation that differs in just one respect (and whatever it entails): Alfred’s use of ‘arthritis’ is the correct use in his linguistic community. In this situation, Alfred would be expressing a true belief when he says ‘I have arthritis in the thigh’. Since the proposition he would there believe is true, while the proposition that he has arthritis in the thigh is false, he believes some other proposition. This shows that standing in the belief relation to a proposition can be partly determined by the meanings of words in one’s public language. The Burge phenomenon seems real, but it would be nice to have a deep explanation of why thought content should be dependent on language in this way.
 Finally, there is the old question of whether, or to what extent, a creature who does not understand a natural language can have thoughts. It seems pretty compelling that higher mammals, and humans raised without language, have their behaviour controlled by mental states that are sufficiently like our beliefs, desires, and intentions to share those labels. It also seems easy to imagine non-communicating creatures who have sophisticated mental lives (they build weapons, dams, and bridges, have clever hunting devices, and so on). At the same time, ascriptions of particular contents to non-language-using creatures typically seem exercises in loose speaking (does the dog really believe that there is a bone in the yard?), and it is no accident that, as a matter of fact, creatures who do not understand a natural language have at best primitive mental lives. It may be that the primitive mental lives of animals account for their failure to master natural language, but the better explanation may be Chomsky’s: a language faculty unique to our species. As regards the inevitably primitive mental life of an otherwise normal human raised without language, this might simply be due to the ignorance and lack of intellectual stimulation such a person would be doomed to. On the other hand, it might also be that higher thought requires a neural language with structure comparable to that of a natural language, and that such neural languages are somehow acquired only in the course of acquiring a natural language. The ascription of content to the propositional-attitude states of languageless creatures is a difficult topic that needs more attention. It is possible that when we better understand our ascriptions of propositional content, we will realize that they are egocentrically based on similarity to the language in which we express our beliefs. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak a language, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.
 The Language of Thought hypothesis has a compelling neatness about it. A thought is depicted as a structure of internal representational elements, combined in a lawful way, which plays a certain functional role in an internal processing economy. The functionalist thinks of mental states and events as causally mediating between a subject’s sensory inputs and that subject’s ensuing behaviour. Functionalism itself is the stronger doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the functional relations it bears to the subject’s perceptual stimuli, behavioural responses, and other mental states.
 The representational theory of the mind arises with the recognition that thoughts have contents carried by mental representations.
 Nonetheless, theorists seeking to account for the mind’s activities have long sought analogues to the mind. In modern cognitive science, these analogues have provided the bases for the simulation or modelling of cognitive performance. If a simulation performs in a manner comparable to the mind, that offers support for the theory underlying the analogue upon which the simulation is based. Simulation, however, also serves a heuristic function, suggesting ways in which the mind, characterized in physical terms, might operate. The problem is most obvious in the case of ‘arbitrary’ signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes (and the problem remains for iconic representations). What kind of mental representation might support denotation and attribution, if not linguistic representation? Denotation and attribution are among the semantic properties of thoughts: ‘thoughts’, in having content, possess semantic properties. If thoughts denote and attribute, sententialism may be best positioned to explain how this is possible.
 Beliefs are true or false. If, as representationalism has it, beliefs are relations to mental representations, then beliefs must be relations to representations that have truth values among their semantic properties. Beliefs serve a function within the mental economy: they play a central part in reasoning and, thereby, contribute to the control of behaviour. To be rational, a set of beliefs, desires, and actions - also perceptions, intentions, and decisions - must fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all: no rationality, no agent. This core notion of rationality in the philosophy of mind thus concerns a cluster of ‘holistic’ coherence requirements on the system of elements comprising a person’s mind; related conceptions of epistemic or normative rationality are key linkages among the cognitive, as distinct from the qualitative, mental states. The main issue is characterizing these types of mental coherence.
 Closely related to thought’s systematicity is its productivity: we have a virtually unbounded competence to think ever more complex novel thoughts having clear semantic ties to their less complex predecessors. Systems of mental representation apparently exhibit the sort of productivity distinctive of spoken languages. Sententialism accommodates this fact by identifying the productive system of mental representation with a language of thought, the basic terms of which are subject to a productive grammar.
 Possibly, in reasoning, mental representations stand to one another just as public sentences do in valid formal derivations. Reasoning would then preserve truth of belief by being the manipulation of truth-valued sentential representations according to rules so selectively sensitive to the syntactic properties of the representations as to respect and preserve their semantic properties. The sententialist hypothesis is thus that reasoning is formal inference: a process tuned primarily to the structure of mental sentences. Reasoners, then, are things very much like classical programmed computers. Thinking, according to sententialism, may then be like quoting. To quote an English sentence is to issue, in a certain way, a token of a given English sentence type; it is certainly not thereby to issue a token of every semantically equivalent type. Perhaps thought is much the same: if to think is to token a sentence in the language of thought, the sheer tokening of one mental sentence need not ensure the tokening of another, formally distinct, equivalent - hence thought’s opacity.
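The picture of reasoning as formal inference can be made vivid with a minimal sketch (mine, not the text’s): belief tokens are treated as bare syntactic items, and an inference rule fires on their shape alone, never on their meaning.

```python
# Minimal sketch of sententialist "formal inference": beliefs are
# sentence tokens (strings, or ('if', p, q) tuples standing for
# conditionals), and the rule below is sensitive only to syntactic
# shape, not to what the tokens mean.

def close_under_modus_ponens(beliefs):
    """Repeatedly apply modus ponens: from p and ('if', p, q), token q."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if isinstance(b, tuple) and len(b) == 3 and b[0] == "if":
                _, antecedent, consequent = b
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

beliefs = {
    "flounders snore",
    ("if", "flounders snore", "flounders sleep"),
    ("if", "flounders sleep", "flounders dream"),
}
print(close_under_modus_ponens(beliefs) - beliefs)
# Derives the tokens "flounders sleep" and "flounders dream" purely
# by pattern-matching on form.
```

Note that the rule never consults what a token means: a formally distinct but semantically equivalent sentence would not be derived, which is just the opacity point about tokening one sentence without tokening its equivalents.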
 Objections to the language of thought come from various quarters. Some will not tolerate any version of representationalism, including sententialism; others endorse representationalism while denying that mental representations could involve anything like a language. Representationalism is launched by the assumption that psychological states are relational - that being in a psychological state minimally involves being related to something. But perhaps psychological states are not relational at all. Adverbialism begins by denying that ascriptions of psychological states are relational, infers that psychological states themselves are monadic and, thereby, opposes classical versions of representationalism, including sententialism.
 With Chomsky’s work in linguistics and advances in computer science, the 1960s saw a rebirth of ‘mentalistic’ or ‘cognitivist’ approaches to psychology and the study of mind.
 These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists’ actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have simply got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds. The two we shall consider are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be ‘naturalized’.
 The most radical approach is perhaps the proposal that cognitive psychology should recast its theories and explanations in a way that appeals not to intentional properties but to ‘syntactic’ properties. Somewhat less radical is the suggestion that we can define a species of narrow representation which does supervene on an organism’s physiology, and that psychological explanations that appeal to ordinary (‘wide’) intentional properties can be replaced by explanations that invoke only their narrow counterparts. However, many philosophers have urged that the problem lies in the argument, not in the way that cognitive psychology goes about its business. The most common critique of the argument focuses on its normative premise - the one that insists that psychological explanations ought not to appeal to ‘wide’ properties that fail to supervene on physiology. Why, the critics ask, should psychological explanations not appeal to wide properties? What exactly is wrong with psychological explanations invoking properties that do not supervene on physiology? Various answers have been proposed in the literature, though they typically end up invoking metaphysical principles that are less clear and less plausible than the normative thesis they are supposed to support.
 Given any psychological property that fails to supervene on physiology, it is trivial to characterize a narrow correlate property that does supervene: the extension of the correlate property includes all actual and possible objects in the extension of the original property, plus all actual and possible physiological duplicates of those objects. Theories originally stated in terms of wide psychological properties can then be recast in terms of their narrow correlates without loss of descriptive or explanatory power. It might be protested that, when characterized in this way, narrow belief and narrow content are not really species of belief and content at all. Nevertheless, it is far from clear how this claim could be defended, or why we should care if it turns out to be right.
 The worry about the ‘naturalizability’ of intentional properties is much harder to pin down. According to Fodor, the worry derives from a certain ontological intuition: that there is no place for intentional categories in a physicalistic view of the world, and thus that the semantic and/or the intentional will prove permanently recalcitrant to integration in the natural order. If, however, intentional properties cannot be integrated into the natural order, then presumably they ought to be banished from serious scientific theorizing: psychology should have no truck with them. Indeed, if intentional properties have no place in the natural order, then nothing in the natural world has intentional properties, and intentional states do not exist at all. So goes the worry. Unfortunately, neither Fodor nor anyone else has said anything very helpful about what is required to ‘integrate’ intentional properties into the natural order. There are, to be sure, various proposals in the literature, but all of them seem to suffer from a fatal defect: on each account of what is required to naturalize a property, or integrate it into the natural order, there are lots of perfectly respectable non-intentional scientific or common-sense properties that fail to meet the standards. Thus all the proposals made so far must be rejected.
 Now, of course, the fact that no one has been able to give a plausible account of what is required to ‘naturalize’ the intentional may indicate nothing more than that the project is a difficult one. Perhaps with further work a more plausible account will be forthcoming. But one might also offer a very different diagnosis of the failure of all the accounts of ‘naturalizing’ that have so far been offered. Perhaps the ‘ontological intuition’ that underlies the worry about integrating the intentional into the natural order is simply muddled; perhaps there is no coherent criterion of naturalization or naturalizability that all properties invoked in respectable science must meet. It may well be that this diagnosis is the right one. Until those who are worried about the naturalizability of the intentional provide us with some plausible account of what is required of intentional categories if they are to find a place in ‘a physicalistic view of the world’, we are justified in refusing to take their worry seriously.
 Recently, John Searle (1992) has offered a new set of philosophical arguments aimed at showing that certain theories in cognitive psychology are profoundly wrong-headed. The theories in question offer computational explanations of various psychological capacities - such as the capacity to recognize grammatical sentences, or the capacity to judge which of two objects in one’s visual field is further away. Typically, these theories are set out in the form of a computer program - a set of rules for manipulating symbols - and the explanation offered for the exercise of the capacity in question is that people’s brains are executing the program. The central claim in Searle’s critique is that being a symbol or a computational state is not an ‘intrinsic’ physical feature of a computer state or a brain state; rather, being a symbol is an ‘observer-relative’ feature. However, Searle maintains, only intrinsic properties of a system can play a role in causal explanations of how it works. Thus, appeal to symbolic or computational states of the brain could not possibly play a role in a causal account of cognition.
 This discussion has surveyed some of the philosophical arguments aimed at showing that cognitive psychology is confused and in need of reform. My reaction to those arguments has been none too sympathetic. In each case, I have maintained, it is the philosophical argument that is problematic, not the psychology it criticizes.
 It is fair to ask where we get the powerful inner code whose representational elements need only systematic combination to express, for example, the thought that cyclotrons are bigger and vaster than black holes. On this matter, however, the language of thought theorist has little to say. According to that theorist, all that concept learning could be - assuming it is some kind of rational process and not due to mere physical maturation or a bump on the head - is the trying out of combinations of existing representational elements to see if a given combination captures the sense (as evidenced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination must express any content we can ever learn to understand. Note that this is not the trivial claim that, in some sense, the resources a system starts with must set limits on what knowledge it can acquire. For these are limits which flow not from, for example, sheer physical size, number of neurons, or connectivity of neurons, but from a base class of genuinely representational elements. They are more like the limits that being restricted to the propositional calculus would place on the expressive power of a system than, say, the limits that having a certain amount of available memory storage would place on one.
 But this picture of representational stasis, in which all change consists in the redeployment of existing representational resources, is fundamentally alien to much influential theorizing in developmental psychology. The prime example is the developmentalist who places a much stronger form of change - a genuine expansion of representational power - at the very heart of a model of human development. In a similar vein, recent work in the field of connectionism seems to open up the possibility of putting well-specified models of strong representational change back at the centre of cognitive scientific endeavours.
 It may be, then, that understanding how the underlying combinatorial code ‘develops’ matters more to a deep understanding of cognitive processes than understanding the structure and use of the code itself (though doubtless the two projects would need to be pursued hand in hand).
 The language of thought depicts thoughts as structures of concepts, which in turn exist as elements (for any basic concept) or concatenations of elements (for the rest) in the inner code. Intentional states, as common sense understands them, have both causal and semantic properties, and the combination appears to be unprecedented. A further problem is that inferential role semantics is, almost invariably, suicidally holistic. And it seems that, if externalism is right, then (some of) the intentional properties of thought are essentially ‘extrinsic’: they essentially involve mind-to-world relations. Yet it is standardly assumed that the computational role of a mental representation is determined entirely by its intrinsic properties - such properties as its weight, shape, or electrical conductivity. On that assumption it is hard to see how the extrinsic properties could matter to computation; which is to say that it is hard to see how there could be computationally sufficient conditions for being in an intentional state, and hence how the immediate implementation of intentional laws could be computational.
 However, there is little to be said about intrinsic relations between basic representational items. Even bracketing the (difficult) question of which, if any, words in our public language may express contents which have as their vehicles atomic items in the language of thought (an empirical question on which Fodor seems to be officially agnostic), the question of semantic relations between atomic items in the language of thought remains. Are there any such relations? And if so, in what do they consist? Two thoughts are depicted as semantically related just in case they share elements, but the elements themselves (like the words of the public language on which they are modelled) seem to stand in splendid isolation from one another. An advantage of some connectionist approaches lies precisely in their ability to address questions of the interrelation of basic representational elements (in fact, activation vectors) by representing such items as locations in a kind of semantic space. In such a space, related contents are always expressed by related representational elements. The connectionist’s conception of significant structure thus goes much deeper than the Fodorian’s, for connectionist representations need never be arbitrary: even the most basic representational items will bear non-accidental relations of similarity and difference to one another. The Fodorian, having reached representational bedrock, must explicitly construct any such further relations; they do not come for free as a consequence of using an integrated representational space. Whether this is a bad thing or a good one will depend, of course, on what kinds of facts we need to explain. But one may suspect that representational atomism will turn out to be a conceptual economy that a science of the mind cannot afford.
 Any approach to ascribing contents must deal with the point that it seems metaphysically possible for there to be something that, in actual and counterfactual circumstances, behaves as if it enjoys states with content when in fact it does not. If this possibility is not denied, the approach must add at least that the states with content causally interact in various ways with one another, and also causally produce intentional action. For most causal theorists, however, a radical separation of the causal and the rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent’s reasons to explain a certain belief or action, those features of the agent’s intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way they rationalize. One way of putting this requirement is that reason-giving states not only cause, but also causally explain, their explananda.
 On most accounts of causation, accepting the causal explanatory role of reason-giving connections requires empirical causal laws employing intentional vocabulary. It is arguments against the possibility of such laws that have been fundamental for those opposing a causal explanatory view of reasons. What is centrally at issue in these debates is the status of the generalizations linking intentional states to each other and to ensuing intentional acts. An example of such a generalization would be: ‘If a person desires X, believes A would be a way of promoting X, is able to A, and has no conflicting desires, then she will do A’. For many theorists such generalizations are a priori links between desire, belief and action: grasping the truth of such a generalization is required to grasp the nature of the intentional states concerned. For some theorists this a priori element disqualifies such generalizations as empirical laws. That, however, seems too quick, for it would similarly rule out any generalizations in the physical sciences that contain a priori elements as a consequence of the implicit definition of their theoretical kinds within a causal explanatory theory. Causal theorists, including functionalists in the philosophy of mind, can claim that it is just such implicit definition that accounts for the a priori status of our intentional generalizations.
 The causal explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to do causal work. It thereby provides a motivation for the reduction of intentional characteristics to extensional ones, in an attempt to fit intentional causality into a fundamentally materialist world picture. The very nature of the reason-giving relation, however, can be seen to render such reductive projects unrealizable. This leaves causal theorists with the task of linking intentional and non-intentional levels of description in such a way as to accommodate intentional causality without either over-determination or a miraculous coincidence of predictions from within distinct causally explanatory frameworks.
 The existence of such causal links could well be written into the minimal core of rational transitions required for the ascription of the contents in question. Yet it is one thing to agree that the ascription of content involves a species of rational intelligibility; it is another to provide an explanation of this fact. There are competing explanations. One treatment regards rational intelligibility as ultimately dependent upon what we find intelligible, or on what we could come to find intelligible in suitable circumstances. This is an analogue of classical treatments of secondary qualities, and as such is a form of subjectivism about content. An alternative position regards the particular conditions for correct ascription of given contents as more fundamental. This alternative states that interpretation must respect these particular conditions. In the case of conceptual contents, it could be developed in tandem with the view that concepts are individuated by the conditions for possessing them. These possession conditions would then function as constraints upon correct interpretation. If such a theorist also assigns references to concepts in such a way that the minimal rational transitions are always truth-preserving, he will also have succeeded in explaining why such transitions are correct. Under an approach that treats conditions for attribution as fundamental, intelligibility need not be treated as a subjective property. There may be concepts we could never grasp because of our intellectual limitations, just as there are concepts that members of other species could not grasp. Such concepts have their possession conditions, but some thinkers could not satisfy those conditions.
 Ascribing states with content to an actual person has to proceed simultaneously with the attribution of a wide range of non-rational states and capacities. In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject: what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject’s rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by reference back to perceptual experience. In this respect (as in others) perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: for frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.
 What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational contents which are capable of influencing their actions appropriately? According to teleological theories of content, a constitutive account of content - one which says what it is for a state to have a given content - must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content ‘p’ is for the belief-forming mechanisms which produced it to have the function (perhaps derivatively) of producing that state only when it is the case that ‘p’. One issue this approach must tackle is whether it is really capable of associating with states the classical, realistic, verification-transcendent contents which, pre-theoretically, we attribute to them. It is not clear that a content’s holding unknowably can influence the replication of belief-forming mechanisms. But even if content itself proves to resist elucidation in terms of natural function and selection, it is still a very attractive view that selection must be mentioned in an account of what associates something - such as a sentence - with a particular content, even though that content itself may be individuated by other means.
 Contents are normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type: the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and of their distances and directions from the perceiver’s body as origin. Such contents lack any sentence-like structure at all. Supporters of the view that all content is conceptual will argue that the legitimacy of using these spatial types in giving the content of experience does not undermine their thesis. Such supporters will say that the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question. Friends of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.
 The actions made rational by content-involving states are actions individuated in part by reference to the agent’s relations to things and properties in his environment. Wanting to see a particular movie, and believing that that building over there is a cinema showing it, makes rational the action of walking in the direction of that building. Similarly, in the fundamental case of a subject who has knowledge about his environment, a crucial factor in making rational the formation of particular attitudes is the way the world is around him. One may expect, then, that any theory that links the attribution of contents to states with rational intelligibility will be committed to the thesis that the content of a person’s states depends in part on his relations to the world outside him. We call this the thesis of externalism about content.
 Externalism about content should steer a middle course. On the one hand, it should not ignore the truism that the relations of rational intelligibility involve not just things and properties in the world, but the ways they are presented as being - an externalist should use some version of Frege’s notion of a mode of presentation. On the other hand, the externalist for whom considerations of rational intelligibility are pertinent to the individuation of content is likely to insist that we cannot dispense with the notion of something in the world being presented in a certain way. If we dispense with the notion of something external being presented in a certain way, we are in danger of regarding attributions of content as having no consequences for how an individual relates to his environment, in a way that is quite contrary to our intuitive understanding of rational intelligibility.
 Externalism comes in more and less extreme versions. Consider a thinker who perceives a particular pear and thinks the thought that that pear is ripe, where the demonstrative way of thinking of the pear expressed by ‘that pear’ is made available to him by his perceiving the pear. Some philosophers have held that the thinker would be employing a different perceptually based way of thinking were he perceiving a different pear. But externalism need not be committed to this. In the perceptual state that makes available the way of thinking, the pear is presented as being at a particular distance, and as having certain properties. A position will still be externalist if it holds that what is involved in the pear’s being so presented is the collective role of these components of content in making intelligible, in various circumstances, the subject’s relations to environmental directions, distances and properties of objects. This can be held without commitment to the object-dependence of the way of thinking expressed by ‘that pear’. This less strenuous form of externalism must, though, address the epistemological arguments offered in favour of the more extreme versions, to the effect that only they are sufficiently world-involving.
 The apparent dependence of the content of belief on factors external to the subject can be formulated as a failure of supervenience of belief content upon facts about what is the case within the boundaries of the subject’s body. To claim that such supervenience fails is to make a modal claim: that there can be two persons the same in respect of their internal physical states (and so in respect of those of their dispositions that are independent of content-involving states), who nevertheless differ in respect of which beliefs they have. Hilary Putnam (1926- ), the American philosopher of science who became more prominent with his ‘Reason, Truth, and History’ (1981), marked out a subtle position that he calls internal realism, initially related to an ideal limit theory of truth and apparently maintaining affinities with verificationism, but in subsequent work more closely aligned with minimalism. Putnam’s concern in the later period has largely been to deny any serious asymmetry between truth and knowledge as obtained in morals, and even in theology.
 In the case of content-involving perceptual states, nonetheless, it is a much more delicate matter to argue for the failure of supervenience. The fundamental reason is that perceptual content is answerable not only to factors on the input side - what, in certain fundamental cases, causes the subject to be in the perceptual state - but also to factors on the output side - what the perceptual state is capable of helping to explain amongst the subject’s actions. If differences in perceptual content always involve differences in bodily-described actions in suitable counterfactual circumstances, and if these different actions always involve differences in internal states, there will after all be supervenience of content-involving perceptual states on internal states. But if this should turn out to be so, it is not a refutation of externalism about perceptual content. A different reaction is that the dependence claimed, as one of supervenience, is in some cases too strong; a better formulation is given by a constitutive claim: that what makes a state have the content it does are certain of its complex relations to external states of affairs. This can be held without commitment to the modal separability of certain internal states from content-involving perceptual states.
 Attractive as externalism about content may be, it has been vigorously contested, notably by the American philosopher of mind Jerry Alan Fodor (1935- ), who is known for a resolute realism about the nature of mental functioning. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as Donald Herbert Davidson (1917-2003). Although Davidson is a defender of the doctrines of the ‘indeterminacy’ of radical translation and the ‘inscrutability’ of reference, his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly ‘extensional’ approach to language. Davidson is also known for his rejection of the idea of a ‘conceptual scheme’, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate. Fodor (1981), for his part, endorses the importance of explanation by content-involving states, but holds that content must be narrow: constituted by internal properties of an individual.
 One influential motivation for narrow content is a doctrine about explanation: that molecule-for-molecule counterparts must have the same causal powers. Externalists have replied that attributions of content-involving states presuppose some normal background or context for the subject of the states, and that content-involving explanations commonly take the presupposed background for granted. Molecular counterparts can have different presupposed backgrounds, and their content-involving states may correspondingly differ. Presupposition of a background of external relations in which something stands is found in sciences other than those that employ the notion of content, including astronomy and geology.
 A more specific concern of those sympathetic to narrow content is that when content is externally individuated, the explanatory principles postulated, in which content-involving states feature, will be a priori in some way that is illegitimate. For instance, it appears to be a priori that behaviour which is intentional under some description involving the concept ‘water’ will be explained by mental states that have the externally individuated concept ‘water’ in their content. The externalist about content will have a twofold response. First, explanations in which content-involving states are implicated will also include explanations of the subject’s standing in a particular relation to the stuff water itself, and for many such relations it is in no way a priori that the thinker’s so standing has a psychological explanation at all. Some such cases will be fundamental to the ascription of externalist content on treatments that tie such content to the rational intelligibility of actions relationally characterized. Second, there are other cases in which the identification of a theoretically postulated state in terms of its relations generates a priori truths, quite consistently with that state playing a role in explanation. If a gene is identified as the gene for a certain phenotypical characteristic, it is arguably a priori that it plays a causal role in the production of that characteristic in members of the species in question. Far from being incompatible with a claim about explanation, the characterization of genes that would make this a priori also requires genes to have a certain causal explanatory role.
 If anything, it is the friend of narrow content who has difficulty accommodating the nature of the explananda that contents are fit to explain: bodily movements characterized in environment-involving terms. We noted that the characteristic explananda of content-involving states, such as walking towards the cinema, are characterized in environment-involving terms. How is the theorist of narrow content to accommodate this fact? He may say that we merely need to add a description of the context of the bodily movement, one which ensures that the movement is in fact a movement towards the cinema. But adding such a description to an explanation of the bodily movement does not give one an explanation of the event’s having that environmental property, let alone a content-involving explanation of the fact. The bodily movement may also be a walking in the direction of Moscow, but it does not follow that we have a rationally intelligible explanation of the event as a walking in the direction of Moscow. Perhaps the theorist of narrow content would at this point add further relational properties of the internal states, of such a kind that, when his explanation is fully supplemented, it sustains the same counterfactuals and predictions as does the explanation that mentions externally individuated content. But such a fully supplemented explanation is not really in competition with the externalist’s account. It begins to appear that if such extensive supplementation is adequate to capture the relational explananda, it is also sufficient to ensure that the subject is in states with externally individuated contents. This problem affects not only treatments of content as narrow, but any attempt to reduce explanation by content-involving states to explanation by neurophysiological states.
 One of the tasks of a sub-personal computational psychology is to explain how individuals come to have beliefs, desires, perceptions and other personal-level content-involving properties. If the content of personal-level states is externally individuated, then the contents mentioned in the sub-personal psychology that is explanatory of those personal states must also be externally individuated. One cannot fully explain the presence of an externally individuated state by citing only states that are internally individuated. On an externalist conception of sub-personal psychology, content-involving computation commonly consists in the explanation of some externally individuated states by other externally individuated states.
 This view of sub-personal content has, though, to be reconciled with the fact that the first states in an organism involved in the explanation - retinal states, in the case of humans - are not externally individuated. The reconciliation is effected by the presupposed normal background, whose importance to the understanding of content we have already emphasized. An internally individuated state, when taken together with a presupposed external background, can explain the occurrence of an externally individuated state.
 An externalist approach to sub-personal content also has the virtue of providing a satisfying explanation of why certain personal-level states are reliably correct in normal circumstances. If the sub-personal computations that cause the subject to be in such states are reliably correct, and the final computation is of the content of the personal-level state, then the personal-level state will be reliably correct. A similar point applies to reliable errors, too, of course. In either case, the attribution of correctness conditions to the sub-personal states is essential to the explanation.
 Externalism generates its own set of issues that need resolution, notably in the epistemology of attributions. A content-involving state may be externally individuated, but a thinker does not need to check on his relations to his environment to know the contents of his beliefs, desires, and perceptions. How can this be? A thinker’s judgements about his beliefs are rationally responsive to his own conscious beliefs. It is a first step to note that a thinker’s beliefs about his own beliefs will then inherit certain sensitivities to his environment that are present in his original (first-order) beliefs. But this is only the first step, for many important questions remain. How can there be conscious, externally individuated states at all? Is it legitimate to infer from the contents of one’s states to certain general facts about one’s environment, and if so, how, and under what circumstances?
 Ascription of attitudes to others also needs further work under an externalist treatment. In order knowledgeably to ascribe a particular content-involving attitude to another person, we certainly do not need explicit knowledge of the external relations required for correct attribution of the attitude. How then do we manage it? Do we have tacit knowledge of the relations on which content depends, or do we in some way take our own case as primary, and think of the relations as whatever underlies certain of our own content-involving states? If the latter, in what wider view of other-ascription should this point be embedded? Resolution of these issues, like so much else in the theory of content, should provide us with some understanding of the conception each one of us has of himself as one mind amongst many, interacting with a common world which provides the anchor for the ascription of content.
 Thought, as commonly understood, has the feature of ‘intentionality’ or ‘content’: in thinking, one thinks about certain things, and one thinks certain things about those things - one entertains propositions that represent states of affairs. Nearly all the interesting properties of thoughts depend upon their content: their being coherent or incoherent, disturbing or reassuring, revolutionary or banal, connected logically or illogically to other thoughts. It is thus hard to see why we would bother to talk of thought at all unless we were also prepared to recognize the intentionality of thought. So we are naturally curious about the nature of content: we want to understand what makes it possible, what constitutes it, what it stems from. To have a theory of thought is to have a theory of its content.
 Four issues have dominated recent thinking about the content of thought. Each may be construed as a question about what thought depends on, and about the consequences of its so depending (or not depending). These potential dependencies concern: (1) the world outside the thinker himself, (2) language, (3) logical truth, and (4) consciousness. In each case the question is whether intentionality is essentially or accidentally related to the item mentioned: does it exist, that is, only by courtesy of the dependence of thought on the said item? And this question bears on what the intrinsic nature of thought is.
 Thoughts are obviously about things in the world, but it is a further question whether they could exist and have the content they do whether or not their putative objects themselves exist. Is what I think intrinsically dependent upon the world in which I happen to think it? This question was given impetus and definition by a thought experiment due to Hilary Putnam, concerning a planet called ‘twin earth’. On twin earth there live thinkers who are duplicates of us in all internal respects but whose surrounding environment contains different kinds of natural objects. The suggestion then is that what these thinkers refer to and think about is individuatively dependent upon their actual environment, so that where we think about cats when we say ‘cat’, they think about the different species that actually sits on their mats, and so on. The key point is that, since it is impossible to individuate natural kinds like cats solely by reference to the way they strike the people who think about them, the content of such thoughts cannot be a function simply of internal properties of the thinker. Content here is relational in nature, fixed by external facts as they bear upon the thinker. Much the same point can be made by considering repeated demonstrative reference to distinct particular objects: what I refer to when I say ‘that bomb’, of different bombs, depends upon the particular bomb in front of me and cannot be deduced from what is going on inside me. Context contributes to content.
 Inspired by such examples, many philosophers have adopted an ‘externalist’ view of thought content: thoughts are not autonomous states of the individual, capable of transcending the contingent facts of the surrounding world. One is therefore not free to think whatever one likes, as it were, whether or not the world beyond cooperates in containing suitable referents for those thoughts. And this conclusion has generated a number of consequential questions. Can we know our thoughts with special authority, given that they are thus hostage to external circumstances? How do thoughts cause other thoughts and behaviour, given that they are not identical with internal states we are in? What kind of explanation are we giving when we cite thoughts? Can there be a science of thought if content does not generalize across environments? These questions have received many different answers, and, of course, not everyone agrees that thought has the kind of world-dependence claimed. Nonetheless, what has not been considered carefully enough is the scope of the externalist thesis: whether it applies to all forms of thought, all concepts. For unless this question is answered affirmatively, we cannot rule out the possibility that thought in general depends on there being some thought that is purely internally determined, so that the externally fixed thoughts are a secondary phenomenon. What about thoughts concerning one’s present sensory experience, or logical thought, or ethical thought? Could there, indeed, be a thinker for whom internalism was generally correct? Is external individuation the rule or the exception? And might it take different forms in different cases?
 Since words are also about things, it is natural to ask how their intentionality is connected to that of thoughts. Two views have been advocated: one view takes thought content to be self-subsisting relative to linguistic content, with the latter dependent upon the former; the other view takes thought content to be derivative upon linguistic content, so that there can be no thought without a bedrock of language. Thus arise controversies about whether animals really think, being non-speakers, or whether computers really use language, being non-thinkers. All such questions depend critically upon what one means by ‘language’. Some hold that spoken language is unnecessary for thought but that there must be an inner language in order for thought to be possible, while others reject the very idea of an inner language, preferring to suspend thought from outer speech. However, it is not entirely clear what it amounts to, to assert (or deny) that there is an inner language of thought. If it means merely that concepts (thought constituents) are structured in such a way as to be isomorphic with spoken language, then the claim is trivially true, given some natural assumptions. But if it means that concepts just are ‘syntactic’ items orchestrated into strings of the same, then the claim is acceptable only in so far as syntax is an adequate basis for meaning, which, on the face of it, it is not. Concepts no doubt have combinatorial powers comparable to those of words, but the question is whether anything else can plausibly be meant by the hypothesis of an inner language.
 On the other hand, it appears undeniable that spoken language does not have autonomous intentionality, but instead derives its meaning from the thought of speakers, though language may augment one’s conceptual capacities. So thought cannot postdate spoken language. The truth seems to be that in human psychology speech and thought are interdependent in many ways, but that there is no conceptual necessity about this. The only ‘language’ on which thought essentially depends is thought itself: thought indeed depends upon there being separable concepts that can join with others to produce complete propositional contents. But this is merely to draw attention to a property any system of concepts must have: it is not to say what concepts are or how they succeed in moving between thoughts as they do. Appeals to language at this point are apt to founder on circularity, since words take on the powers of concepts only in so far as they express them. Thus there seems little philosophical illumination to be got from making thought depend upon language.
 The third dependency question is prompted by the reflection that, while people are no doubt often irrational, woefully so, there seems to be some kind of intrinsic limit to their unreason. Even the sloppiest thinker will not infer anything from anything; to do so is a sign of madness. The question then is what grounds this apparent concession to logical prescription: whence the hold of logic over thought? For the dependence can seem puzzling: why should the natural causal processes of thought respect the relations of logic? I am free to flout the moral law to any degree I desire, but my freedom to think unreasonably appears to encounter an obstacle in the requirements of logic. My thoughts are sensitive to logical truth in somewhat the way they are sensitive to the world surrounding me: they have not the independence of what lies outside my will or self that I fondly imagined. I may try to reason contrary to modus ponens, but my efforts will be systematically frustrated. Pure logic takes possession of my reasoning processes and steers them according to its own dictates: fallibly, of course, but in a systematic way that seems perplexing.
 One view of this is that ascriptions of thought are not attempts to map a realm of independent causal relations, which might then conceivably come apart from logical relations, but are rather just a useful method of summing up people’s behaviour. Another view insists that we must acknowledge that thought is not a natural phenomenon in the way merely physical facts are: thoughts are inherently normative in their nature, so that logical relations constitute their inner essence. Thought incorporates logic in somewhat the way externalists say it incorporates the world. Accordingly, the study of thought cannot be a natural science in the way the study of (say) chemical compounds is. Whether this view is acceptable depends upon whether we can make sense of the idea that transitions in nature, such as reasoning appears to be, can also be transitions in logical space, i.e., be constrained by the structure of that space. What must thought be, such that this combination of features is possible? Put differently, what is it for logical truth to be self-evident?
 This dependency question has been studied less intensively than the previous three. The question is whether intentionality is dependent upon consciousness for its very existence, and if so why. Could our thoughts have the very content they now have if we were not conscious beings at all? Unfortunately, it is difficult to see how to mount an argument in either direction. On the one hand, it can hardly be an accident that our thoughts are conscious and that their content is reflected in the intrinsic condition of our states of consciousness: it is not as if consciousness leaves off where thought content begins, as it does with, say, the neural basis of thought. Yet, on the other hand, it is by no means clear what it is about consciousness that links it to intentionality in this way. Much of the trouble here stems from our exceedingly poor understanding of how consciousness could arise from brain tissue (the mind-body problem), so that we fail to grasp the manner in which conscious states bear meaning. Perhaps content is fixed by extra-conscious properties and relations and only subsequently shows up in consciousness, as various naturalistic reductive accounts would suggest; or perhaps consciousness itself plays a more enabling role, allowing meaning to come into the world, hard as this may be to penetrate. In some ways the question is analogous to one about, say, the properties of pain: is the aversive property of pain, causing avoidance behaviour and so forth, essentially independent of the conscious state of feeling, or is it that pain could only have its aversive function in virtue of the conscious feeling? This is part of the more general question of the epiphenomenal character of consciousness: is conscious awareness just a dispensable accompaniment of some mental feature, such as content or causal power, or is it that consciousness is structurally involved in the very determination of the feature?
 It is only too easy to feel pulled in both directions on this question, neither alternative being utterly felicitous. Some theorists suspect that our uncertainty over such questions stems from a constitutional limitation on human understanding: we just cannot develop the necessary theoretical tools with which to provide answers to these questions, so we may not, even in principle, be able to make any progress with the issue of whether thought depends upon consciousness and why. Certainly our present understanding falls far short of providing us with any clear route into the question.
 It is extremely tempting to picture thought as some kind of inscription in a mental medium, and reasoning as a temporal sequence of such inscriptions. On this picture, all that a particular thought requires in order to exist is that the medium in question should be impressed with the right inscription. This makes thought independent of anything else. On some views the medium is conceived as consciousness itself, so that thought depends on consciousness as writing depends on paper and ink. But ever since Wittgenstein wrote, we have seen that this conception of thought has to be mistaken, in particular its conception of intentionality. The definitive characteristics of thought cannot be captured within this model. Thus, it cannot make room for the idea of intrinsic world-dependence, since any inner inscription would be individuatively independent of items outside the putative medium of thought. Nor can it be made to square with the dependence of thought on logical patterns, since the medium could be configured in any way permitted by its intrinsic nature, without regard for logical truth, as sentences can be written down in any old order one likes. And it misconstrues the relation between thought and consciousness, since content cannot consist in marks on the surface of consciousness, so to speak. States of consciousness do contain particular meanings, but not as a page contains sentences: the medium conception of the relation between content and consciousness is thus deeply mistaken. The only way to make meaning enter internally into consciousness is to deny that consciousness is a medium for meaning to be expressed in. However, this only marks the difficulty of forming an adequate conception of how consciousness does carry content, one puzzle being how the external determinants of content find their way into the fabric of consciousness.
 Only the alleged dependence of thought upon language fits the tempting inscriptional picture, but as we have seen, this idea tends to crumble under examination. The indicated conclusion seems to be that we simply do not possess a conception of thought that makes its real nature theoretically comprehensible, which is to say that we have no adequate conception of mind. Once we form a conception of thought that makes it seem unmysterious, as with the inscriptional picture, it turns out to have no room for content as it presents itself; while building content in as it is leaves us with no clear picture of what could have such content. Thought is ‘real’, then, if and only if it is mysterious.
 In the philosophy of mind, ‘epiphenomenalism’ means that while there exist mental events, states of consciousness, and experiences, they have themselves no causal powers, and produce no effect on the physical world. The analogy sometimes used is that of the whistle on the engine: the whistle makes a sound (corresponding to experiences), but plays no part in making the machinery move. Epiphenomenalism is a drastic solution to the major difficulty of reconciling the existence of mind with the fact that, according to physics itself, only a physical event can cause another physical event. An epiphenomenalist may accept one-way causation, whereby physical events produce mental events, or may prefer some kind of parallelism, avoiding causation either between mind and body or between body and mind. Occasionalism, by contrast, is the view that reserves causal efficacy to the action of God: events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. The position is associated especially with the French Cartesian philosopher Nicolas Malebranche (1638–1715), who inherited the Cartesian view that pure sensation has no representative power, and so added the doctrine that knowledge of objects requires other, representative ideas that are somehow surrogates for external objects. These are archetypes or ideas of objects as they exist in the mind of God, so that ‘we see all things in God’. In the philosophy of mind, the difficulty of seeing how mind and body can interact suggests that we ought instead to think of them as two systems running in parallel. When I stub my toe, this does not cause pain, but there is a harmony between the mental and the physical (perhaps due to God) that ensures that there will be a simultaneous pain; when I form an intention and then act, the same benevolence ensures that my action is appropriate to my intention.
The theory has never been widely popular, and many philosophers would say that it was the result of a misconceived ‘Cartesian dualism’. A major problem for epiphenomenalism, moreover, is that if mental events have no causal powers, it is not clear that they can be objects of memory, or even of awareness.
 ‘Base and superstructure’ is the metaphor used by the founder of revolutionary communism, Karl Marx (1818–83), and the German social philosopher and collaborator of Marx, Friedrich Engels (1820–95), to characterize the relation between the economic organization of society, which is its base, and the political, legal, and cultural organization and social consciousness of a society, which is the superstructure. The sum total of the relations of production of material life conditions the social, political, and intellectual life process in general. The way in which the base determines the superstructure has been the object of much debate, with writers from Engels onwards concerned to distance themselves from the crude determinism that the metaphor might suggest. It has also been emphasized that the relations of production are not merely economic, but involve political and ideological relations. The view that all causal power is centred in the base, with everything in the superstructure merely epiphenomenal, is sometimes called economism. The problems are strikingly similar to those that arise when the mental is regarded as supervenient upon the physical, and it is then disputed whether this takes all causal power away from mental properties.
 Just the same, if, as the causal theory of action implies, intentional action requires that a desire for something and a belief about how to obtain what one desires play a causal role in producing behaviour, then, if epiphenomenalism is true, we cannot perform intentional actions. Merely describing events that happen does not of itself permit us to talk of rationality and intention, which are the categories we may apply only if we conceive of them as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the ‘will’ and ‘free will’. Other problems in the theory of action include drawing the distinction between the structures involved when we do one thing ‘by’ doing another thing. Even the placing and dating of an action can give rise to puzzles, as when the fatal shot is fired on one day and in one place, and the victim then dies on another day and in another place: where and when did the murder take place? The notion of intentional action inherits all the problems of ‘intentionality’. The specific problems it raises include characterizing the difference between doing something accidentally and doing it intentionally. The suggestion that the difference lies in a preceding act of mind or volition is not very happy, since one may automatically do what is nevertheless intentional, for example, putting one’s foot forward while walking. Conversely, unless the formation of a volition is itself intentional, which raises the same questions, the presence of a volition might be unintentional or beyond one’s control. Intentions are also more finely grained than movements: one set of movements may be both answering a question and starting a war, yet the one may be intentional and the other not.
 However, according to the traditional doctrine of epiphenomenalism, things are not as they seem: in reality, mental phenomena can have no causal effects; they are causally inert, causally impotent. Only physical phenomena are causally efficacious. Mental phenomena are caused by physical phenomena, but they cannot cause anything. In short, mental phenomena are epiphenomenal.
 The epiphenomenalist claims that mental phenomena seem to be causes only because there are regularities that involve types (or kinds) of mental phenomena. For example, instances of a certain mental type ‘M’, e.g., trying to raise one’s arm, might tend to be followed by instances of a physical type ‘P’, e.g., one’s arm rising. To infer that instances of ‘M’ tend to cause instances of ‘P’ would be, however, to commit the fallacy of post hoc, ergo propter hoc. Instances of ‘M’ cannot cause instances of ‘P’: such causal transactions are causally impossible. M-type events tend to be followed by P-type events because instances of such events are dual effects of common physical causes, not because such instances causally interact. Mental events and states can figure in the web of causal relations only as effects, never as causes.
 Epiphenomenalism is a truly stunning doctrine. If it is true, then no pain could ever be a cause of our wincing, nor could something’s looking red to us ever be a cause of our thinking that it is red. A nagging headache could never be a cause of a bad mood. Moreover, if the causal theory of memory is correct, then, given epiphenomenalism, we could never remember our prior thoughts, or an emotion we once felt, or a toothache we once had, or having heard someone say something, or having seen something: for such mental states and events could not be causes of memories. Furthermore, epiphenomenalism is arguably incompatible with the possibility of intentional action. For if, as the causal theory of action implies, intentional action requires that a desire for something and a belief about how to obtain what one desires play a causal role in producing behaviour, then, if epiphenomenalism is true, we cannot perform intentional actions. As it stands, the functionalist theory needs to be expanded to accommodate this point, most obviously by specifying the circumstances in which belief-desire explanations are to be deployed. However, matters are not as simple as they seem. On the functionalist theory, beliefs are causal functions from desires to actions. This creates a problem, because all of the different modes of psychological explanation appeal to states that fulfil a similar causal function from desires to actions. Of course, it is open to a defender of the functionalist approach to say that it is strictly speaking beliefs, and not, for example, innate releasing mechanisms, that interact with desires in a way that generates actions. Nonetheless, this sort of response is of limited effectiveness unless some reason is given for distinguishing between a state of hunger and a desire for food; it is no use if it simply describes desires as functions from beliefs to actions.
 Of course, to say that the functionalist theory of belief needs to be expanded is not to say that it needs to be expanded along non-functionalist lines. Nothing that has been said rules out the possibility that a correct and adequate account of what distinguishes beliefs from non-intentional psychological states can be given purely in terms of their respective functional roles. The core of the functionalist theory of self-reference is the thought that agents can have subjective beliefs that do not involve any internal representation of the self, linguistic or non-linguistic. It is in virtue of this that the functionalist theory claims to be able to dissolve the paradox. The problem that has emerged, however, is that it remains unclear whether those putative subjective beliefs really are beliefs. The thesis is that all cases of action to be explained in terms of belief-desire psychology have to be explained through the attribution of beliefs. The thesis is clearly at work when utility conditions, and hence truth conditions, are assigned to the belief that causes the hungry creature facing food to eat what is in front of him, thus determining the content of the belief to be ‘There is food in front of me’, or ‘I am facing food’. The problem, however, is that it is not clear that this is warranted. Either content would explain why the animal eats what is in front of it. Nonetheless, the difference implicates different thoughts, only one of which is a genuinely first-person thought.
 Now, the content of the belief that the functionalist theory demands we ascribe to an animal facing food is ‘I am facing food now’ or ‘There is food in front of me now’. These are, it seems clear, structured thoughts; so too, for that matter, is the indexical thought ‘There is food here now’. The crucial point, however, is that the causal function from desires to actions, which, in itself, is all that a subjective belief is, would be equally well served by the unstructured thought ‘Food’.
 At the heart of the reason-giving relation is a normative claim: an agent has a reason for believing, acting, and so forth if, given his other psychological states, this belief or action is justified or appropriate. Displaying someone’s reasons consists in making clear this justificatory link. Paradigmatically, the psychological states that provide an agent with reasons are intentional states individuated in terms of their propositional content. There is a long tradition that emphasizes that the reason-giving relation is a logical or conceptual relation. In the case of reasons for action, some of the premises of any reasoning are provided by intentional states other than belief.
 Notice that we cannot then assert that epiphenomenalism is true, if it is, since an assertion is an intentional speech act. Still further, if epiphenomenalism is true, then our sense that we are agents who can act on our intentions and carry out our purposes is illusory. We are actually passive bystanders, never agents; in no relevant sense is what happens up to us. Our sense of partial causal control over our lives is illusory: we exert no causal control over even the direction of our attention. Finally, suppose that reasoning is a causal process. Then, if epiphenomenalism is true, we never reason, for there are no mental causal processes. While one thought may follow another, one thought never leads to another. Indeed, while thoughts may occur, we do not engage in the activity of thinking. How, then, could we make inferences that commit the fallacy of post hoc, ergo propter hoc, or make any inferences at all, for that matter?
 As neurophysiological research began to develop in earnest during the latter half of the nineteenth century, it seemed to find no mental influence on what happens in the brain. While it was not established that neurophysiological events by themselves causally determine all other neurophysiological events, there seemed to be no ‘gaps’ in neurophysiological causal mechanisms that could be filled by mental occurrences. Neurophysiology appeared to have no need of the hypothesis that there are mental events. (Here and hereafter, unless indicated otherwise, ‘events’ in the broadest sense will include states as well as changes.) This ‘no gap’ line of argument led some theorists to deny that mental events have any causal effects. They reasoned as follows: if mental events have any effects, among their effects would be neurophysiological ones; mental events have no neurophysiological effects; thus, mental events have no effects at all. The relationship between mental phenomena and neurophysiological mechanisms was likened to that between the steam-whistle which accompanies the working of a locomotive engine and the mechanisms of the engine: just as the steam-whistle is an effect of the operations of the mechanisms but has no causal influence on those operations, so too mental phenomena are effects of the workings of neurophysiological mechanisms, but have no causal influence on their operations. (The analogy quickly breaks down, for steam-whistles have causal effects, while the epiphenomenalist alleges that mental phenomena have no causal effects at all.)
 An early response to this ‘no gap’ line of argument was that mental events (and states) are not changes in (and states of) an immaterial Cartesian substance; they are, rather, changes in (and states of) the brain. While mental properties or kinds are not neurophysiological properties or kinds, nevertheless particular mental events are neurophysiological events. According to the view in question, a given event can be an instance of both a neurophysiological type and a mental type, and thus be both a mental event and a neurophysiological event. (Compare the fact that an object might be an instance of more than one kind of object: for example, an object might be both a stone and a paper-weight.) It was held, moreover, that mental events have causal effects because they are neurophysiological events with causal effects. This response presupposes that causation is an ‘extensional’ relation between particular events: that if two events are causally related, they are so related however they are typed (or described). That assumption is today widely held. And given that the causal relation is extensional, if particular mental events are indeed neurophysiological events with causal effects, then mental events are causes, and epiphenomenalism is thus false.
 This response to the ‘no gap’ argument, however, prompts a concern about the relevance of mental properties or kinds to causal relations. In 1925 C. D. Broad told us that the view that mental events are epiphenomenal is the view that mental events either (a) do not function at all as causal factors, or (b) if they do, they do so in virtue of their physiological characteristics and not in virtue of their mental characteristics. If particular mental events are physiological events with causal effects, then mental events function as causal factors: they are causes. However, the question still remains whether mental events are causes in virtue of their mental characteristics. Neurophysiology can explain neurophysiological occurrences without postulating mental characteristics. This prompts the concern that even if mental events are causes, they may be causes in virtue of their physiological characteristics, but not in virtue of their mental characteristics.
 This concern presupposes, of course, that events are causes in virtue of certain of their characteristics or properties. But it is today fairly widely held that when two events are causally related, they are so related in virtue of something about each. Indeed, theories of causation assume that if two events ‘x’ and ‘y’ are causally related, and two other events ‘a’ and ‘b’ are not, then there must be some difference between ‘x’ and ‘y’ on the one hand and ‘a’ and ‘b’ on the other, in virtue of which ‘x’ and ‘y’ are, but ‘a’ and ‘b’ are not, causally related. And such theories attempt to say what that difference is: that is, they attempt to say what it is about causally related events in virtue of which they are so related. For example, according to so-called ‘nomic subsumption’ views of causation, causally related events will be so related in virtue of falling under types (or in virtue of having properties) that figure in a ‘causal law’. It should be noted that the assumption that causally related events are so related in virtue of something about each is compatible with the assumption that the causal relation is an ‘extensional’ relation between particular events. The weighs-less-than relation is an extensional relation between particular objects: if O weighs less than O*, then O and O* are so related however they are typed (or characterized, or described). Nevertheless, if O weighs less than O*, that is so in virtue of something about each, namely their weights and the fact that the weight of one is less than the weight of the other. Examples are readily multiplied: extensional relations between particulars typically hold in virtue of something about the particulars. We will grant, then, that when two events are causally related, they are so related in virtue of something about each.
 Invoking the distinction between types and tokens, and using the term ‘physical’ rather than the more specific term ‘physiological’, we can distinguish two broad varieties of epiphenomenalism:
  Token Epiphenomenalism: Mental events cannot cause anything.
  Type Epiphenomenalism: No event can cause anything in virtue of
  falling under a mental type.
So, on this way of speaking, property epiphenomenalism is the thesis that no event can cause anything in virtue of having a mental property. The conjunction of token epiphenomenalism and the claim that physical events cause mental events is, of course, the traditional doctrine of epiphenomenalism, as characterized earlier. Token epiphenomenalism implies type epiphenomenalism; for if an event could cause something in virtue of falling under a mental type, then an event could be a cause, and token epiphenomenalism would be false. Thus, if mental events cannot be causes, then events cannot be causes in virtue of falling under mental types. The denial of token epiphenomenalism does not, however, imply the denial of type epiphenomenalism. A mental event may be a physical event that has causal effects; if so, token epiphenomenalism is false. Yet type epiphenomenalism may still be true, for it may be that events cannot be causes in virtue of falling under mental types. Thus, even if token epiphenomenalism is false, the question remains whether type epiphenomenalism is.
 Suppose, for the sake of argument, that type epiphenomenalism is true. Why would that be a concern if mental events are physical events with causal effects? Given our assumption that the causal relation is extensional, it could be true, consistent with type epiphenomenalism, that pains cause winces, that desires cause behaviour, that perceptual experiences cause beliefs, that mental states cause memories, and that reasoning processes are causal processes. Nevertheless, while perhaps not as disturbing a doctrine as token epiphenomenalism, type epiphenomenalism can, upon reflection, seem disturbing enough.
 Notice to begin with that ‘in virtue of’ expresses an explanatory relationship: ‘In virtue of’ is arguably a near synonym of the more common locution ‘because of’. In any case, the following seems adequate: An event causes a G-event in virtue of being an F-event if and only if it causes a G-event because of being an F-event. ‘In virtue of’ implies ‘because of’, and in the case in question at least the implication seems to go in the other direction as well. Suffice it to note that were type epiphenomenalism consistent with its being the case that an event could have a certain effect because of falling under a certain mental type, we would indeed be owed an explanation of why it should be of any concern if type epiphenomenalism is true. We will, however, assume that type epiphenomenalism is inconsistent with that: We will assume that type epiphenomenalism can be reformulated as the thesis that no event can cause anything because of falling under a mental type. (And we will assume that property epiphenomenalism can be reformulated thus: No event can cause anything because of having a mental property.) To say that ‘a’ causes ‘b’ in virtue of being ‘F’ is to say that ‘a’ causes ‘b’ because of being ‘F’; that is, it is to say that it is because ‘a’ is ‘F’ that it causes ‘b’. So understood, type epiphenomenalism is a disturbing doctrine indeed.
 If type epiphenomenalism is true, then it could never be the case that circumstances are such that it is because some event or state is a sharp pain, or a desire to flee, or a belief that danger is near, that it has a certain sort of effect. It could never be the case that it is because some state is a desire to ‘X’ (impress someone) and another is a belief that one can ‘X’ by doing ‘Y’ (standing on one’s head) that the states jointly result in one’s doing ‘Y’ (standing on one’s head). If type (property) epiphenomenalism is true, then nothing has any causal powers whatever in virtue of (because of) being an instance of a mental type. It could never be the case that it is in virtue of being of a certain mental type that a state has the causal power in certain circumstances to produce some effect. For example, it could never be the case that it is in virtue of being an urge to scratch (or a belief that danger is near) that a state has the causal power in certain circumstances to produce scratching behaviour (or fleeing behaviour). If type epiphenomenalism is true, then the mental qua mental, so to speak, is causally impotent. That may very well seem disturbing enough.
 What reason is there, however, for holding type epiphenomenalism? Even if neurophysiology does not need to postulate types of mental events, perhaps the science of psychology does. Note that physics has no need to postulate types of neurophysiological events; but that need not lead one to doubt that an event can have effects in virtue of being (say) a neuron firing. Moreover, mental types figure in our everyday causal explanations of behaviour, intentional action, memory, and reasoning. What reason is there, then, for holding that events cannot have effects in virtue of being instances of mental types? This question naturally leads to the more general question of which event types are such that events have effects in virtue of falling under them. This more general question is best addressed after considering a ‘no gap’ line of argument that has emerged in recent years.
 Current physics includes quantum mechanics, a theory which appears able, in principle, to explain how chemical processes unfold in terms of the mechanics of subatomic particles. Molecular biology seems able, in principle, to explain how the physiological operations of systems in living things occur in terms of biochemical pathways, long chains of chemical reactions. On the evidence, biological organisms are complex physical objects, made up of molecular parts (there are no entelechies or élans vitaux). Since we are all biological organisms, the movements of our bodies and of their minute parts, including the chemicals in our brains, appear to be causally determined by states of subatomic particles and fields. Such considerations have inspired a line of argument that only events within the domain of physics are causes.
 Before presenting the argument, let us make some terminological stipulations. Let us henceforth use ‘physical event type’ (‘state type’) and ‘physical property’ in a strict and narrow sense to mean, respectively, a type of event (state) and a property postulated by current physics (or by some improved version of current physics); such types and properties figure in laws of physics. Finally, by a ‘physical event’ (‘state’) we will mean an event (state) that falls under a physical type. Only events within the domain of (current) physics (or some improved version of current physics) count as physical in this strict and narrow sense.
Consider, then:
   The Token-Exclusion Thesis: Only physical events can have
  causal effects (i.e., as a matter of causal necessity, only physical
  events have causal effects).
The premises of the basic argument for the token-exclusion thesis are:
   Physical Causal Closure: Only physical events can cause
  physical events.
   Causation by way of Physical Effects: As a matter of at least
  causal necessity, an event is a cause of another event if and only if it
  is a cause of some physical event.
These principles jointly imply the exclusion thesis. The principle of causation by way of physical effects is supported by the empirical claim that every event occurs within space-time, and by the principle that an event is a cause of an event that occurs within a given region of space-time if and only if it is a cause of some physical event that occurs within that region of space-time. The following claim is offered in support of physical causal closure:
   Physical Causal Determination: For any (caused) physical
  event ‘P’, there is a chain of entirely physical events leading to ‘P’,
  each link of which causally determines its successor.
(A qualification: If strict determinism is not true, then each link will instead determine the objective probability of its successor.) There is compelling empirical reason to believe that physical causal determination holds: Every physical event will have a sufficient physical cause. More precisely, there will be a deterministic causal chain of physical events leading to any physical event ‘P’ (or, failing strict determinism, links that fix objective probabilities; but such links there will be), and such physical causal chains are entirely ‘gap-less’. Now, to be sure, physical causal determination does not imply physical causal closure; the former, but not the latter, is consistent with non-physical events causing physical events. However, a standard response is that such non-physical events would be, without exception, over-determining causes of physical events, and it is ad hoc to maintain that non-physical events are over-determining causes of physical events.
 Are mental events within the domain of physics? Perhaps; like objects, events can fall under many different types or kinds. We noted earlier that a given object might, for instance, be both a stone and a paperweight. We understand how a stone could be a paperweight; but how, for instance, could an event involving subatomic particles and fields be a mental event? Suffice it to note for the moment that if mental events are not within the domain of physics, then if the token-exclusion thesis is true, no mental event can ever cause anything: Token epiphenomenalism is true.
 One might reject the token-exclusion thesis, however, on the grounds that typical events within the domains of the special sciences (chemistry, the life sciences, and so on) are not within the domain of physics, but nevertheless have causal effects. One might maintain that neuron firings, for instance, cause other neuron firings, even though neurophysiological events are not within the domain of physics. Rejecting the token-exclusion thesis, however, requires arguing either that physical causal closure is false or that the principle of causation by way of physical effects is.
 One response to the ‘no-gap’ argument from physics is to reject physical causal closure. Recall that physical causal determination is consistent with non-physical events being over-determining causes of physical events. One might concede that it would be ad hoc to maintain that a non-physical event ‘N’ is an over-determining cause of a physical event ‘P’ in a way that is independent of the causation of ‘P’ by other physical events. Nonetheless, one might argue that ‘N’ can cause a physical event ‘P’ in a way that is dependent upon P’s being caused by physical events. Again, one might argue that physical events ‘underlie’ non-physical events, and that a non-physical event ‘N’ can be a cause of another event ‘X’ (physical or non-physical) in virtue of the physical events that ‘underlie’ ‘N’ being causes of ‘X’.
 Another response is to deny the principle of causation by way of physical effects. Physical causal closure is consistent with non-physical events causing other non-physical events. One might thus concede physical causal closure but deny the principle of causation by way of physical effects, and argue that non-physical events cause other non-physical events without causing physical events. This would not require denying that (1) physical events invariably ‘underlie’ non-physical events, or that (2) whenever a non-physical event causes another non-physical event, some physical event that underlies the first causes a physical event that underlies the second. Claims (1) and (2) together do not imply the principle of causation by way of physical effects. Moreover, from the fact that a physical event ‘P’ causes another physical event ‘P*’, it does not follow that ‘P’ causes every non-physical event that ‘P*’ underlies. Nor does that follow if the physical events that underlie non-physical events causally suffice for those non-physical events. What would follow is only that, for every non-physical event, there is a causally sufficient physical event. But it may be denied that causal sufficiency suffices for causation: It may be argued that there are further constraints on causation that can fail to be met by an event that causally suffices for another. And it may be argued that, given the further constraints, non-physical events can be the causes of non-physical events.
 However, the most common response to the ‘no-gap’ argument from physics is to concede it, and thus to embrace its conclusion, the token-exclusion thesis, but to maintain the doctrine of ‘token physicalism’, the doctrine that every event (state) is within the domain of physics. If special science events and mental events are within the domain of physics, then they can be causes consistently with the token-exclusion thesis.
 Now whether special science events and mental events are within the domain of physics depends, in part, on the nature of events, and that is a highly controversial topic about which there is nothing approaching a received view. The topic raises deep issues concerning the ‘essence’ of events and the relationship between causation and causal explanation, issues that are beyond the scope of this essay. Suffice it to note here that the same fundamental issues concerning the causal efficacy of the mental arise for all the leading theories of the ‘relata’ of the causal relation; the issues just ‘pop up’ in different places. That, however, cannot be argued here, and will have to be assumed.
 Since the token physicalism response to the no-gap argument from physics is the most popular response, let us assume in what follows that special science events, and even mental events, are within the domain of physics. Of course, if mental events are within the domain of physics, then token epiphenomenalism can be false even if the token-exclusion thesis is true: For mental events may be physical events which have causal effects.
 Nevertheless, concerns about the causal relevance of mental properties and event types would remain. Indeed, token physicalism, together with a fairly uncontroversial assumption, naturally leads to the question of whether events can be causes only in virtue of falling under types postulated by physics. The assumption is that physics postulates a system of event types with the following feature:
   Universal Physical Causal Comprehensiveness: When two physical
  events are causally related, they are so related in virtue of falling
  under physical types.
That thesis naturally invites the question of whether the following is true:
   The Type - Exclusion Thesis: An event can cause something
  only in virtue of falling under a physical type, i.e., a type
  postulated by physics.
The type-exclusion thesis offers one would-be answer to our earlier question of which event types are such that events have effects in virtue of falling under them. If that answer is correct, however, the fact that special science events and mental events are within the domain of physics will be cold comfort. For type physicalism, the thesis that every event type is a physical type, seems false. Mental types seem not to be physical types in our strict and narrow sense: No mental type, it seems, is necessarily coextensive (i.e., coextensive in every ‘possible world’) with any type postulated by physics. Given that, and given the type-exclusion thesis, type epiphenomenalism is true. Moreover, typical special science types also fail to be necessarily coextensive with any physical types, and thus fail to be physical types; indeed, we individuate the sciences in part by the event (state) types they postulate. Given that typical special science types are not physical types (in our strict sense), it follows from the type-exclusion thesis that typical special science types are not such that events can have causal effects in virtue of falling under them.
 Since a neuron firing is not a type of event postulated by physics, given the type-exclusion thesis no event could ever have any causal effects in virtue of being a neuron firing: The neurophysiological qua neurophysiological is causally impotent. Moreover, if things have causal powers only in virtue of their physical properties, then an HIV virus, qua HIV virus, does not have the causal power to contribute to depressing the immune system: For being an HIV virus is not a physical property (in our strict sense). Similarly, for the same reason, the Salk vaccine, qua Salk vaccine, would not have the causal power to contribute to producing an immunity to polio. Furthermore, if, as it seems, phenotypic properties are not physical properties, phenotypic properties do not endow organisms with causal powers conducive to survival. Having hands, for instance, could never endow anything with causal powers conducive to survival, since it could never endow anything with any causal powers whatsoever. But how, then, could phenotypic properties be units of natural selection? And if, as it seems, genotypes are not physical types, then, given the type-exclusion thesis, genes do not have the causal power, qua genotypes, to transmit the genetic bases of phenotypes. How, then, could the role of genotypes as units of heredity be a causal role? There seem to be ample grounds for scepticism that any reason for holding the type-exclusion thesis could outweigh our reasons for rejecting it.
 We noted that the thesis of universal physical causal comprehensiveness, or ‘upc-comprehensiveness’ for short, invites the question of whether the type-exclusion thesis is true. But can one accept upc-comprehensiveness while rejecting the type-exclusion thesis?
 Notice that there is a crucial one-word difference between the two theses: The exclusion thesis contains the word ‘only’ in front of ‘in virtue of’, while the thesis of upc-comprehensiveness does not. This difference is relevant because ‘in virtue of’ does not imply ‘only in virtue of’. I am a brother in virtue of being a male with a sister, but I am also a brother in virtue of being a male with a brother; and, of course, one can be a male with a sister without being a male with a brother, and conversely. Likewise, I live in the province of Ontario in virtue of living in the city of Toronto, but it is also true that I live in the province of Ontario in virtue of living in the County of York. Moreover, the point holds in the general case: If something ‘x’ bears a relation ‘R’ to something ‘y’ in virtue of x’s being ‘F’ and y’s being ‘G’, it does not follow that it does so only in virtue of that. Suppose that ‘x’ weighs less than ‘y’ in virtue of x’s weighing (say) 2 lbs. and y’s weighing 3 lbs. Then it is also true that ‘x’ weighs less than ‘y’ in virtue of x’s weighing under 3 lbs. and y’s weighing over 2 lbs. And something can, of course, weigh under 3 lbs. without weighing 2 lbs. To repeat, ‘in virtue of’ does not imply ‘only in virtue of’.
 Why, then, think that upc-comprehensiveness implies the type-exclusion thesis? The fact that two events are causally related in virtue of falling under physical types does not seem to exclude the possibility that they are also causally related in virtue of falling under non-physical types: in virtue of one being (say) a firing of a certain neuron and the other a firing of a certain other neuron, or in virtue of one being a secretion of enzymes and the other a breakdown of amino acids. Notice that the thesis of upc-comprehensiveness implies that whenever an event is an effect of another, it is so in virtue of falling under a physical type. But the thesis does not seem to imply that whenever an event is an effect of another, it is so only in virtue of falling under a physical type. Upc-comprehensiveness seems consistent with events being effects in virtue of falling under non-physical types. Similarly, the thesis seems consistent with events being causes in virtue of falling under non-physical types.
 Nevertheless, an explanation is called for of how events could be causes in virtue of falling under non-physical types if upc-comprehensiveness is true. The most common strategy for offering such an explanation involves maintaining that there is a dependence-determination relationship between non-physical types and physical types. Upc-comprehensiveness, together with the claim that instances of non-physical event types are causes or effects, implies that, as a matter of causal necessity, whenever an event falls under a non-physical event type, it falls under some physical type or other. The instantiation of non-physical types by an event thus depends, as a matter of causal necessity, on the instantiation of some or other physical event type by the event. It is held that non-physical types are ‘realized’ by physical types in physical contexts: Although a given non-physical type might be ‘realizable’ by more than one physical type, the occurrence of a physical type in a physical context in some sense determines the occurrence of any non-physical type that it ‘realizes’.
 Recall the considerations that inspired the ‘no gap’ argument from physics: Quantum mechanics seems able, in principle, to explain how chemical processes unfold in terms of the mechanics of subatomic particles; molecular biology seems able, in principle, to explain how the physiological operations of systems in living things occur in terms of biochemical pathways, long chains of chemical reactions. Types of subatomic causal processes ‘implement’ types of chemical processes. Many in the cognitive science community hold that computational processes implement mental processes, and that computational processes are implemented, in turn, by neurophysiological processes.
 The Oxford English Dictionary gives the everyday meaning of ‘cognition’ as ‘the action or faculty of knowing’. The philosophical meaning is the same, but with the qualification that it is to be ‘taken in its widest sense, including sensation, perception, conception, and volition’. Given the historical link between psychology and philosophy, it is not surprising that ‘cognitive’ in ‘cognitive psychology’ has something like this broader philosophical sense, rather than the everyday one. Nevertheless, the semantics of ‘cognitive psychology’, like that of many adjective-noun combinations, is not entirely transparent. Cognitive psychology is a branch of psychology, and its subject matter approximates to the psychological study of cognition in this broad sense; but, for reasons that are largely historical, its scope is not exactly what one would predict.
 Many cognitive psychologists have little interest in philosophical issues, though cognitive scientists are, in general, more receptive. Fodor, because of his early involvement in sentence processing research, is taken seriously by many psycholinguists. His modularity thesis is directly relevant to questions about the interplay of different types of knowledge in language understanding. His innateness hypothesis, however, is generally regarded as unhelpful, and his prescriptions for cognitive psychology are largely ignored. Dennett’s recent work on consciousness treats a topic that is highly controversial, but his detailed discussion of psychological research findings has enhanced his credibility among psychologists. Overall, psychologists are happy to get on with their work without philosophers telling them about their ‘mistakes’.
 The hypothesis driving most of modern cognitive science is simple to state: the mind is a computer. What are the consequences for the philosophy of mind? This question acquires heightened interest and complexity from the new forms of computation employed in recent cognitive theory.
 Cognitive science has traditionally been based upon symbolic computation systems: systems of rules for manipulating structures built up of tokens of different symbol types. (This classical kind of computation is a direct outgrowth of mathematical logic.) Since the mid-1980s, however, cognitive theory has increasingly employed connectionist computation: the spread of numerical activation across interconnected units. On this view, one of the most impressive and plausible ways of modelling cognitive processes is by means of a connectionist, or parallel distributed processing, computer architecture. In such a system data are input to a number of cells at one level, which pass activation to ‘hidden’ units, which in turn deliver an output.
 Such a system can be ‘trained’ by adjusting the weights a hidden unit accords to each signal from an earlier cell. The ‘training’ is accomplished by ‘back propagation of error’, meaning that if the output is incorrect the network makes the minimum adjustment necessary to correct it. Such systems prove capable of producing differentiated responses of great subtlety. For example, a system may be able to take as input written English, and deliver as output phonetically accurate speech. Proponents of the approach also point out that networks have a certain resemblance to the layers of cells that make up a human brain, and that, like us but unlike conventional computer programs, networks degrade gracefully: with local damage they go blurry rather than crash altogether. Controversy has concerned the extent to which the differentiated responses made by networks deserve to be called recognitions, and the extent to which non-recognitional cognitive functions, including linguistic and computational ones, are well approached in these terms.
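 The training procedure described above can be sketched in a few lines of code. What follows is a minimal illustration, not any particular published model: a tiny network with one layer of hidden units learns the exclusive-or task by repeated back propagation of error. All names, the choice of task, and the learning rate are assumptions made for the sake of illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Each unit's activation is a squashed sum of its weighted inputs.
    return 1.0 / (1.0 + np.exp(-x))

# Four input patterns and their target outputs (the XOR task, a standard
# example: a network with no hidden units cannot solve it).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(2, 4))   # weights: input units -> 4 hidden units
W2 = rng.normal(size=(4, 1))   # weights: hidden units -> output unit

def forward(inputs):
    hidden = sigmoid(inputs @ W1)        # activation spreads to hidden units
    return hidden, sigmoid(hidden @ W2)  # ...and on to the output unit

_, before = forward(X)
error_before = np.mean((before - y) ** 2)

for _ in range(5000):                    # 'training' the network
    hidden, out = forward(X)
    delta_out = (out - y) * out * (1 - out)                  # output error...
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)   # ...propagated back
    W2 -= 0.5 * hidden.T @ delta_out     # small weight adjustments
    W1 -= 0.5 * X.T @ delta_hid

_, after = forward(X)
error_after = np.mean((after - y) ** 2)
```

On each pass the error at the output is propagated backwards through the weights, and each weight is nudged slightly in the direction that reduces that error; over many passes the error on the four patterns shrinks, and the network's responses become differentiated.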
 Some terminology will prove useful. Let us stipulate that an event type ‘T’ is a causal type if and only if there is at least one type ‘T*’ such that something can cause a T* in virtue of being a ‘T’. And let us say that an event type is realizable by physical event types (or physical properties) if and only if it is at least causally possible for the type to be realized by a physical event type. Given that non-physical causal types must be realizable by physical types, and given that mental types are non-physical types, there are two ways that mental types might fail to be causal. First, mental types may fail to be realizable by physical types. Second, mental types might be realizable by physical types but fail to meet some further condition for being causal types. Reasons of both sorts can be found in the literature on mental causation for denying that any mental types are causal. Much attention has been paid to reasons of the first sort in the case of phenomenal mental types (pain states, visual states, and so forth), and to reasons of the second sort in the case of intentional mental state types (i.e., beliefs that P, desires that Q, intentions that R, and so on).
 Notice that intentional states figure in explanations of intentional actions not only in virtue of their intentional mode (whether they are beliefs or desires, and so on) but also in virtue of their contents, i.e., what is believed, or desired, and so forth. For example, what causally explains someone’s doing ‘A’ (standing on his head) is that the person wants to ‘X’ (impress someone) and believes that by doing ‘A’ he will ‘X’. The contents of the belief and desire (what is believed and what is desired) seem essential to the causal explanation of the agent’s doing ‘A’. Similarly, we often causally explain why someone came to believe that ‘P’ by citing the fact that the individual came to believe that ‘Q’ and inferred ‘P’ from ‘Q’. In such cases, the contents of the states in question are essential to the explanation. This is not, of course, to say that contents themselves are causally efficacious; contents are not among the relata of causal relations. The point is, rather, that we characterize states when giving such explanations not only as having intentional modes, but also as having certain contents: We type states for the purpose of such explanations in terms of their intentional modes and their contents. We might call intentional state types that include content properties ‘conceptual intentional state types’, but to avoid prolixity, let us call them ‘intentional state types’ for short. Thus, for present purposes, by ‘intentional state types’ we will mean types such as the belief that ‘P’, the desire that ‘Q’, and so on, and not types such as belief, desire, and the like.
 The American philosopher Hilary Putnam in 1981 marked a departure from scientific realism in favour of a subtle position that he called internal realism, initially related to an ideal limit theory of truth and apparently maintaining affinities with verificationism, but in subsequent work more closely aligned with ‘minimalism’; Putnam’s concern in the later period has largely been to deny any serious asymmetry between truth and knowledge as it is obtained in natural science and as it is obtained in morals and even theology. Although raising such concerns was no part of his purpose, Putnam’s well-known ‘twin earth’ thought experiments have prompted concerns about whether intentional states are causal. These thought experiments are fairly widely held to show that individuals alike in every intrinsic physical respect can have intentional states with different contents. If they show that, then intentional state types fail to supervene on intrinsic physical state types: The contents an individual’s beliefs, desires, and the like have depend, in part, on extrinsic, contextual factors. Given that, the concern has been raised that states cannot have effects in virtue of falling under intentional state types.
 One concern seems to be that states cannot have effects in virtue of falling under intentional state types because individuals who are in all and only the same intrinsic states must have all and only the same causal powers. In response to that concern, it might be pointed out that causal powers often depend on context. Consider weight. The weights of objects do not supervene on their intrinsic properties: Two objects can be exactly alike in every intrinsic respect (and thus have the same mass) yet have different weights. Weight depends, in part, on extrinsic, contextual factors. Nonetheless, it seems true that an object can make a scale read 10 lbs. in virtue of weighing 10 lbs. Thus, objects which are in exactly the same type of intrinsic state may have different causal powers due to differences in their circumstances.
 It should be noted, however, that on some leading ‘externalist’ theories of content, content, unlike weight, depends on historical context. Call such theories ‘historical-externalist theories’. On one leading historical-externalist theory, the content of a state depends on the learning history of the individual; on another, it depends on the selection history of the species of which the individual is a member. Historical-externalist theories prompt a concern that states cannot have causal effects in virtue of falling under intentional state types. Causal state types, it might be claimed, are never such that their tokens must have a certain causal ancestry. But, if so, then, if the right account of content is a historical-externalist account, intentional types are not causal types. Some historical-externalists appear to concede this line of argument, and thus to deny that states can have effects in virtue of falling under intentional state types. Other historical-externalists attempt to explain how intentional types can be causal, even though their tokens must have appropriate causal ancestries. This issue is hotly debated, and remains unresolved.
 Finally, let us note why it is controversial whether phenomenal state types can be realized by physical state types. Phenomenal state types are such that it is like something for a subject to be in them: It is, for instance, like something to have a throbbing pain. It has been argued that phenomenal state types are, for that reason, subjective: To fully understand what it is to be in them, one must be able to take up a certain experiential point of view. For, it is claimed, an essential aspect of what it is to be in a phenomenal state is what it is like to be in the state, and only by taking up a certain experiential point of view can one understand that aspect. Physical states (in our strict and narrow sense), by contrast, are paradigms of objective, i.e., non-subjective, states. The issue arises, then, as to whether phenomenal state types can be realized by physical state types: How could an objective state realize a subjective one? This issue too is hotly debated, and remains unresolved. Suffice it to say that if only physical types and types realizable by physical types are causal, and if phenomenal types are neither, then nothing can have any causal effects in virtue of falling under a phenomenal type. Thus, it could never be the case, for example, that a state causally results in a bad mood in virtue of being a throbbing pain.
 Philosophical theories are unlike scientific ones. Scientific theories answer questions in circumstances where there are agreed-upon methods for answering the questions and where the answers themselves are generally agreed upon. Philosophical theories are different: they attempt to model the known data so that they can be seen from a new perspective, a perspective that promotes the development of genuine scientific theory. Philosophical theories are, thus, proto-theories; as such, they are useful precisely in areas where no large-scale scientific theory exists, which is exactly the state psychology is in at present. Philosophy of mind, on this view, is a kind of propaedeutic to a psychological science. What is clear is that at the moment no universally accepted paradigm for a scientific psychology exists. It is exactly in this kind of circumstance that philosophers of mind can contribute: they can consider the empirical data available and try to form a generalized, coherent way of looking at those data that will guide further empirical research, i.e., philosophers can provide a highly schematized model that will structure that research. The resulting research will, in turn, help bring about refinements of the schematized theory, with the ultimate hope that a viable scientific theory, one wherein investigators agree on the questions and on the methods to be used to answer them, will emerge. In these respects, philosophical theories of mind, though concerned with current empirical data, are too general with respect to the data to be scientific theories. Moreover, philosophical theories are aimed primarily at a body of accepted data; as such, philosophical theories merely give a ‘picture’ of those data. Scientific theories not only have to deal with the given data but also have to make predictions that can be gleaned from the theory together with accepted data.
 This reaching out to unknown data is what forms the empirical basis of a scientific theory and allows it to be justified in a way quite distinct from the way in which philosophical theories are justified. Philosophical theories are only schemata, coherent pictures of the accepted data, only pointers toward empirical theory, and, as the history of philosophy makes manifest, usually unsuccessful ones - though I do not think this lack of success is any kind of fault: these are different tasks.
 In the philosophy of science, a theory is a generalization or set of generalizations purportedly making reference to unobservable entities, e.g., atoms, genes, quarks, unconscious wishes, and so forth. The ideal gas law, by contrast, refers only to such observables as pressure, temperature, and volume and their properties. Although an older usage of ‘theory’ suggests a lack of adequate evidence in support thereof (‘merely a theory’), current philosophical usage does not carry that connotation. Einstein’s special theory of relativity, for example, is considered extremely well founded.
 There are two main views on the nature of theories. According to the ‘received view’, theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models.
 The axiomatization of a theory is called for because a theory usually emerges as a body of (supposed) truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory rather more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called ‘axioms’. David Hilbert had the idea that, just as algebraic and differential equations, which are means of representing physical processes and mathematical structures, could themselves be made mathematical objects, so axiomatic theories could be made objects of mathematical investigation.
 In a celebrated speech given in 1900, the mathematician David Hilbert (1862-1943) identified 23 outstanding problems in mathematics. The first was the ‘continuum hypothesis’. The second was the problem of the consistency of mathematics. This evolved into a programme of formalizing mathematical reasoning, with the aim of giving meta-mathematical proofs of its consistency. (Clearly there is no hope of providing a relative consistency proof of classical mathematics by giving a ‘model’ in some other domain: any domain large and complex enough to provide a model would raise the same doubts.) The programme was effectively ended by Kurt Gödel (1906-78), whose theorem of 1931 showed that any consistency proof for arithmetic would need to make logical and mathematical assumptions at least as strong as arithmetic itself, and hence be just as much prey to hidden inconsistencies.
 In the rationalist tradition (as in Leibniz, 1704), many philosophers held the conviction that all truths, or all truths about a particular domain, followed from a few principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused’ by them. When the principles were taken as epistemically prior, that is, as axioms, either they were taken to be epistemically privileged, e.g., self-evident, not needing to be demonstrated, or again, on an inclusive reading of ‘or’, to be such that all truths do indeed follow from them, at least by deductive inference. Gödel (1931) showed - in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects - that mathematics, and even a small part of mathematics, elementary number theory, could not be fully axiomatized; more precisely, any class of axioms of which we could effectively decide membership would be too small to capture all of the truths.
 The claim that ‘philosophy is to be replaced by the logic of science - that is to say, by the logical analysis of the concepts and sentences of the sciences, for the logic of science is nothing other than the logical syntax of the language of science’ has a very specific meaning. The background was provided by Hilbert’s axiomatic treatment of mathematics: for purposes of philosophical analysis, any scientific theory could ideally be reconstructed as an axiomatic system formulated within the framework of Russell’s logic. Further analysis of a particular theory could then proceed as the logical investigation of its ideal logical reconstruction. Claims about theories in general were couched as claims about such logical systems.
 In both Hilbert’s geometry and Russell’s logic an attempt was made to distinguish between logical and non-logical terms. Thus the symbol ‘&’ might be used to indicate the logical relationship of conjunction between two statements, while ‘P’ is supposed to stand for a non-logical predicate. As in the case of geometry, the idea was that underlying any scientific theory is a purely formal logical structure captured in a set of axioms formulated in the appropriate formal language. A theory of geometry, for example, might include an axiom stating that for any two distinct P’s (points), ‘p’ and ‘q’, there exists an ‘L’ (line) such that O(p, l) and O(q, l), where ‘O’ is a two-place relation between P’s and L’s (‘p lies on l’). Such axioms, taken all together, were said to provide an implicit definition of the meaning of the non-logical predicates: whatever the P’s and L’s might be, they must satisfy the formal relationships given by the axioms.
 The logical empiricists were not primarily logicians: they were empiricists first. From an empiricist point of view, it is not enough that the non-logical terms of a theory be implicitly defined: they must also be given an empirical interpretation. This was provided by the ‘correspondence rules’, which explicitly linked some of the non-logical terms of a theory with terms whose meaning was presumed to be given directly through ‘experience’ or ‘observation’. The simplest sort of correspondence rule would be one that takes the application of an observationally meaningful term, such as ‘dissolves’, as being both necessary and sufficient for the applicability of a theoretical term, such as ‘soluble’. Such a correspondence rule would provide a complete empirical interpretation of the theoretical term.
 A definitive formulation of the classical view was provided by the German logical positivist Rudolf Carnap (1891 - 1970), who divided the non - logical vocabulary of theories into theoretical and observational components. The observational terms were presumed to be given a complete empirical interpretation, which left the theoretical terms with only an indirect empirical interpretation provided by their implicit definition within an axiom system in which some of the terms possessed a complete empirical interpretation.
 Among the issues generated by Carnap’s formulation was the viability of ‘the theory-observation distinction’. Of course, one could always arbitrarily designate some subset of non-logical terms as belonging to the observational vocabulary, but that would compromise the relevance of the philosophical analysis for an understanding of the original scientific theory. But what could be the philosophical basis for drawing the distinction? Take the predicate ‘spherical’, for example. Anyone can observe that a billiard ball is spherical. But what about the moon, on the one hand, or an invisible speck of sand, on the other? For which of these objects is the application of ‘spherical’ ‘observational’?
 Another problem was more formal: Craig’s theorem seemed to show that a theory reconstructed in the recommended fashion could be re-axiomatized in such a way as to dispense with all theoretical terms, while retaining all logical consequences involving only observational terms. Craig’s theorem is a theorem in mathematical logic held to have implications in the philosophy of science. The logician William Craig at Berkeley showed that if we partition the vocabulary of a formal system (say, into the ‘T’ or theoretical terms and the ‘O’ or observational terms), then, given a fully formalized system ‘T’ with some set ‘S’ of consequences containing only ‘O’ terms, there is also a system containing only the ‘O’ vocabulary but strong enough to give the same set ‘S’ of consequences. The theorem is a purely formal one, in that ‘T’ and ‘O’ simply partition the formulae into the preferred ones, containing as non-logical terms only one kind of vocabulary, and the others. The theorem might encourage the thought that the theoretical terms of a scientific theory are in principle dispensable, since the same observational consequences can be derived without them.
 However, Craig’s actual procedure gives no effective way of dispensing with theoretical terms in advance, i.e., in the actual process of thinking about and designing the premises from which the set ‘S’ follows. In this sense the ‘O’ system remains parasitic upon its parent ‘T’. Still, as far as the ‘empirical’ content of a theory is concerned, it seems that we can do without the theoretical terms. Carnap’s version of the classical view thus seemed to imply a form of instrumentalism, a problem which Carl Gustav Hempel (1905-97) christened ‘the theoretician’s dilemma’.
 In the late 1940s, the Dutch philosopher and logician Evert Beth published an alternative formalism for the philosophical analysis of scientific theories. He drew inspiration from the work of Alfred Tarski, who studied first biology and then mathematics, studied logic with Kotarbinski, Lukasiewicz, and Lesniewski, and published a succession of papers from 1923 onwards; Tarski worked on decidable and undecidable axiomatic systems, and in the course of his mathematical career published over 300 papers and books on topics ranging from set theory to geometry and algebra. Beth also drew inspiration from Rudolf Carnap, the German logical positivist who left Vienna to become a professor at Prague in 1931, fled Nazism to become a professor in Chicago in 1935, and subsequently worked at Los Angeles from 1954 to 1961. Beth drew further inspiration from von Neumann’s work on the foundations of quantum mechanics. Some twenty years later, Beth’s approach was taken up by Bas van Fraassen, himself an emigrant from Holland. Here we consider the contrast between the ‘syntactic’ approach of the classical view and the ‘semantic’ approach of Beth and van Fraassen, taking as an example the following simple geometrical theory, presented by van Fraassen (1989), first in the form of axioms:
  A1: For any two lines, at most one point lies on both.
  A2: For any two points, exactly one line lies on both.
  A3: On every line are at least two points.
Note first that these axioms are stated in more or less everyday language. On the classical view one would first have to reconstruct these axioms in some appropriate formal language, thus introducing quantifiers and other logical symbols, and one would have to attach appropriate correspondence rules. Contrary to common connotations of the word ‘semantic’, the semantic approach downplays concerns with language as such. Any language will do, so long as it is clear enough to make reliable discriminations between the objects which satisfy the axioms and those which do not. The concern is not so much with what can be deduced from the axioms, valid deduction being a matter of syntax alone. Rather, the focus is on ‘satisfaction’, on what satisfies the axioms - a semantic notion. The objects that do so are, in the technical, logical sense of the term, models of the axioms. So, on the semantic approach, the focus shifts from the axioms as linguistic entities to the models, which are non-linguistic entities.
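The semantic notion of satisfaction can be made concrete for a finite structure: a candidate set of points and lines either is or is not a model of A1-A3, and this can be checked mechanically. The sketch below (illustrative only; the seven-point geometry used as the example is the standard finite model) tests a structure against the three axioms:

```python
from itertools import combinations

def satisfies_axioms(points, lines):
    """Check A1-A3 for a point-line structure; `lines` is a list of sets of points."""
    # A1: for any two lines, at most one point lies on both
    a1 = all(len(l1 & l2) <= 1 for l1, l2 in combinations(lines, 2))
    # A2: for any two points, exactly one line lies on both
    a2 = all(sum(1 for l in lines if p in l and q in l) == 1
             for p, q in combinations(sorted(points), 2))
    # A3: on every line are at least two points
    a3 = all(len(l) >= 2 for l in lines)
    return a1 and a2 and a3

# The seven-point geometry: 7 points, 7 lines, 3 points on each line
points = set(range(1, 8))
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
print(satisfies_axioms(points, lines))        # True: this structure is a model
print(satisfies_axioms(points, lines[:-1]))   # False: dropping a line violates A2
```

The check is about the objects, not the sentences: nothing here depends on how the axioms were phrased, only on which structures satisfy them.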
 It is not enough to be in possession of a general interpretation for the terms used to characterize the models; one must also be able to identify particular instances - for example, a particular nail in a particular board. In real science much effort and sophisticated equipment may be required to make the required identification, for example, of a star as a white dwarf or of a formation in the ocean floor as a transform fault. On a semantic approach, these complex processes of interpretation and identification, while essential to being able to use a theory, have no place within the theory itself. This is in sharp contrast to the classical view, on which the correspondence rules are part of the theory, with the very awkward consequence that innovations in instrumentation change the theory itself. The semantic approach better captures the scientist’s own understanding of the difference between theory and instrumentation.
 On the classical view the question ‘What is a scientific theory?’ receives a straightforward answer. A theory is (1) a set of uninterpreted axioms in a specific formal language plus (2) a set of correspondence rules that provide a partial empirical interpretation in terms of observable entities and processes. A theory is thus true if and only if the interpreted axioms are all true. To obtain a similarly straightforward answer, the semantic approach proceeds a little differently. Return to the axioms, now considered not as free-standing statements but as parts of a definition, which could be formulated as follows: any set of points and lines constitutes a seven-point geometry if and only if it satisfies the axioms. Since a definition is not even a candidate for truth or falsity, one can hardly identify a theory with a definition. But claims to the effect that various things satisfy the definition may be true or false of the world. Call these claims theoretical hypotheses. So we may say that, on the semantic approach, a theory consists of (1) a theoretical definition plus (2) a number of theoretical hypotheses. The theory may be said to be true just in case all its associated theoretical hypotheses are true.
 Adopting a semantic approach to theories still leaves wide latitude in the choice of specific techniques for formulating particular scientific theories. Following Beth, van Fraassen adopts a ‘state space’ representation which closely mirrors techniques developed in theoretical physics during the nineteenth century - techniques that were carried over into the development of quantum and relativistic mechanics. The technique can be illustrated most simply for classical mechanics.
 Consider a simple harmonic oscillator, which consists of a mass constrained to move in one dimension subject to a linear restoring force - a weight bouncing gently from a spring provides a rough example of such a system. Let ‘x’ represent the single spatial dimension, ‘t’ the time, ‘p’ the momentum, ‘k’ the strength of the restoring force, and ‘m’ the mass. Then a linear harmonic oscillator may be ‘defined’ as a system which satisfies the following differential equations of motion:
 dx/dt = ∂H/∂p,  dp/dt = -∂H/∂x,  where H = (k/2)x² + (1/2m)p²
The Hamiltonian, ‘H’, represents the sum of the kinetic and potential energy of the system. The state of the system at any instant of time is a point in a two-dimensional position-momentum space. The history of any such system in this state space is given by an ellipse; in time, the system repeatedly traces out the ellipse, and projecting its motion onto the ‘x’ axis recovers the familiar oscillation in position. A similar state-space treatment can be given covering the whole of classical mechanics. It remains an empirical question whether any real-world system, such as a bouncing spring, satisfies this definition.
 Other advocates of a semantic approach differ from the Beth-van Fraassen point of view in the type of formalism they would employ in reconstructing actual scientific theories. One influential approach derives from the work of Patrick Suppes during the 1950s and 1960s; Suppes was inspired in part by the logicians J. C. C. McKinsey and Alfred Tarski. In its original form, Suppes’s view was that theoretical definitions should be formulated in the language of set theory. Suppes’s approach, as developed by his student Joseph Sneed (1971), has been adopted widely in Europe, particularly in Germany by the late Wolfgang Stegmüller (1976) and his students. Frederick Suppe’s version shares features of both the state-space and the set-theoretical approaches.
 Most of those who have developed ‘semantic’ alternatives to the classical ‘syntactic’ approach to the nature of scientific theories were inspired by the goal of reconstructing scientific theories - a goal shared by advocates of the classical view. Many philosophers of science now question whether there is any point in producing philosophical reconstructions of scientific theories. Rather, insofar as the philosophy of science focuses on theories at all, it is scientific theories in their own terms that should be of primary concern. And many now argue that the major concern should be directed toward the whole practice of science, in which theories are but a part. In these latter pursuits what is needed is not a technical framework for reconstructing scientific theories, but merely a general interpretative framework for talking about actual theories and their various roles in the practice of science. This becomes especially important when considering sciences such as biology, in which mathematical models play less of a role than in physics.
 At this point there are strong reasons for adopting a generalized model-based understanding of scientific theories which makes no commitment to any particular formalism - for example, state spaces or set-theoretical predicates. In fact, one can even drop the distinction between ‘syntactic’ and ‘semantic’ as a leftover from an old debate. The important distinction is between an account of theories that takes models as fundamental and one that takes statements, particularly laws, as fundamental. A major argument for a model-based approach is the one just given: there seem in fact to be few, if any, universal statements that might even plausibly be true, let alone known to be true, and thus available to play the role which laws have been thought to play in the classical account of theories. Rather, what have often been taken to be universal generalizations should be interpreted as parts of definitions. Again, it may be helpful to introduce explicitly the notion of an idealized theoretical model, an abstract entity which answers precisely to the corresponding theoretical definition. Theoretical models thus provide, though only by fiat, something of which theoretical definitions may be true. This makes it possible to interpret much of scientists’ theoretical discourse as being about theoretical models rather than directly about the world. What have traditionally been interpreted as laws of nature thus turn out to be merely statements describing the behaviour of theoretical models.
 If one adopts such a generalized model-based understanding of scientific theories, one must characterize the relationship between theoretical models and real systems. Van Fraassen (1980) suggests that it should be one of isomorphism. But the same considerations that count against there being true laws in the classical sense also count against there being anything in the real world strictly isomorphic to any theoretical model, or even isomorphic to an ‘empirical’ sub-model. What is needed is a weaker notion of similarity, for which it must be specified both in which respects the theoretical model and the real system are similar, and to what degree. These specifications, however, like the interpretation of terms used in characterizing the model and the identification of relevant aspects of real systems, are not part of the model itself. They are part of a complex practice in which models are constructed and tested against the world in an attempt to determine how well they ‘fit’.
 Divorced from its formal background, a model-based understanding of theories is easily incorporated into a general framework of naturalism in the philosophy of science. It is particularly well suited to a cognitive approach to science. Today the idea of a cognitive approach to the study of science means something quite different - indeed, something antithetical to the earlier meaning. A ‘cognitive approach’ is now taken to be one that focuses on the cognitive structures and processes exhibited in the activities of individual scientists. The general nature of these structures and processes is the subject matter of the newly emerging cognitive science. A cognitive approach to the study of science appeals to specific features of such structures and processes to explain the models and choices of individual scientists. It is assumed that to explain the overall progress of science one must ultimately also appeal to social factors; the resulting approach is thus a social as well as a cognitive one, but not one in which the cognitive excludes the social. Both are required for an adequate understanding of science as the product of human activities.
 What is excluded by the newer cognitive approach to the study of science is any appeal to a special definition of rationality which would make rationality a categorical or transcendent feature of science. Of course, scientists have goals, both individual and collective, and they employ more or less effective means for achieving these goals. So one may invoke an ‘instrumental’ or ‘hypothetical’ notion of rationality in explaining the success or failure of various scientific enterprises. But what is at issue is just the effectiveness of various goal-directed activities, not rationality in any more exalted sense which could provide a demarcation criterion distinguishing science from other activities, such as business or warfare. What distinguishes science is its particular goals and methods, not any special form of rationality. A cognitive approach to the study of science, then, is a species of naturalism in the philosophy of science.
 Naturalism in the philosophy of science, and philosophy generally, is more an overall approach to the subject than a set of specific doctrines. In philosophy it may be characterized only by the most general ontological and epistemological principles, and then more by what it opposes than by what it proposes.
 Besides ontological and epistemological naturalism, probably the single most important contributor to naturalism in the past century was Charles Robert Darwin (1809-82), who, while not a philosopher, was a naturalist in both the philosophical and the biological sense of the term. In ‘The Descent of Man’ (1871) Darwin made clear the implications of natural selection for humans, including both their biology and psychology, thus undercutting forms of anti-naturalism which appealed not only to extra-natural vital forces in biology, but to human freedom, values, morality, and so forth. These supposed indicators of the extra-natural are all, for Darwin, merely products of natural selection.
 All in all, among advocates of a cognitive approach there is near unanimity in rejecting the logical positivist ideal of scientific knowledge as being represented in the form of an interpreted, axiomatic system. But there the unanimity ends. Many employ a ‘mental models’ approach derived from the work of Johnson-Laird (1983). Others favour ‘production rules’ (‘if this, infer that’), a form of representation long used by researchers in computer science and artificial intelligence, while some appeal to neural network representations.
 The logical positivists are notorious for having restricted the philosophical study of science to the ‘context of justification’, thus relegating questions of discovery and conceptual change to empirical psychology. A cognitive approach to the study of science naturally embraces these issues as of central concern. Again, there are differences. The pioneering treatment, inspired by the work of Herbert Simon, employed techniques from computer science and artificial intelligence to generate scientific laws from finite data. These methods have now been generalized in various directions: some, such as Nersessian, appeal to the study of analogical reasoning in cognitive psychology, while Gooding (1990) develops a cognitive model of experimental procedure. Both Nersessian and Gooding combine cognitive with historical methods, yielding what Nersessian calls a ‘cognitive-historical’ approach. Most advocates of a cognitive approach to conceptual change insist that a proper cognitive understanding of conceptual change avoids the problem of incommensurability between old and new theories.
 No one employing a cognitive approach to the study of science thinks that there could be an inductive logic which would pick out the uniquely rational choice among rival hypotheses. But some, such as Thagard (1991), think it possible to construct an algorithm that could be run on a computer and would show which of two theories is the better. Others seek to model such judgements as decisions by individual scientists, whose various personal, professional, and social interests are necessarily reflected in the decision process. Here, it is important to see how experimental design and the results of experiments may influence individual decisions as to which theory best represents the real world.
 The major differences in approach among those who share a general cognitive approach to the study of science reflect differences in cognitive science itself. At present, ‘cognitive science’ is not a unified field of study, but an amalgam of parts of several previously existing fields, especially artificial intelligence, cognitive psychology, and cognitive neuroscience. Linguistics, anthropology, and philosophy also contribute. Which particular approach a person takes has typically been determined more by disciplinary background than by anything else; progress in developing a cognitive approach may depend on looking past specific disciplinary differences and focusing on those cognitive aspects of science where the need for further understanding is greatest.
 Broadly, the problem of scientific change is to give an account of how scientific theories, propositions, concepts, and/or activities alter over the course of time and generations. Must such changes be accepted as brute products of guesses, blind conjectures, and genius? Or are there rules according to which at least some new ideas are introduced and ultimately accepted or rejected? Could such rules be codified into a coherent system, a theory of ‘the scientific method’? Or are they more like rules of thumb, subject to exceptions whose character may not be specifiable and not necessarily leading to desired results? Do these supposed rules themselves change over time? If so, do they change in the light of the same factors as more substantive scientific beliefs, or independently of such factors? Does science ‘progress’? And if so, is its goal the attainment of truth, or a simple or coherent account (true or not) of experience, or something else?
 Controversy exists about what a theory of scientific change should be a theory of the change ‘of’. Philosophers long assumed that the fundamental objects of study are the acceptance or rejection of individual beliefs or propositions, change of concepts, positions, and theories being derivative from that. More recently, some have maintained that the fundamental units of change are theories or larger coherent bodies of scientific belief, or concepts or problems. Again, the kinds of causal factors which an adequate theory of scientific change should consider are far from evident. Among the various factors said to be relevant are observational data, the accepted background of theory, higher-level methodological constraints, and psychological, sociological, religious, metaphysical, or aesthetic factors influencing decisions made by scientists about what to accept and what to do.
 These issues affect the very delineation of the field of philosophy of science: in what ways, if any, does it, in its search for a theory of scientific change, differ from and rely on other areas, particularly the history and sociology of science? One traditional view was that those other fields are not relevant at all, at least in any fundamental way. Even if they are, exactly how do they relate to the interests peculiar to the philosophy of science? In defining their subject many philosophers have distinguished matters internal to scientific development - ones relevant to the discovery and/or justification of scientific claims - from ones external thereto - psychological, sociological, religious, metaphysical, and so forth, not directly relevant but frequently having a causal influence. A line of demarcation is thus drawn between science and non-science, and simultaneously between philosophy of science, concerned with the internal factors which function as reasons (or count as reasoning), and other disciplines, to which the external, non-rational factors are relegated.
 This array of issues is closely related to the question whether a proper theory of scientific change is normative or descriptive. Is philosophy of science confined to description of what scientists actually do? Insofar as it is descriptive, to what extent must scientific cases be described with complete accuracy? Can the theory of internal factors be a ‘rational reconstruction’, a retelling that partially distorts what actually happened in order to bring out the essential reasoning involved?
 Or should a theory of scientific change be normative, prescribing how science ought to proceed? Should it counsel scientists about how to improve their procedures? Or would it be presumptuous of philosophers to advise scientists about how to do what they are far better prepared to do? Most advocates of a normative philosophy of science agree that their theories are accountable somehow to the actual conduct of science. Perhaps philosophy should clarify what is done in the best science: but can what qualifies as ‘best science’ be specified without bias? Feyerabend objects to taking certain developments as paradigmatic of good science. With others, he accepts the ‘pessimistic induction’, according to which, since all past theories have proved incorrect, present ones can be expected to prove incorrect as well; what we consider good science, even the methodological rules we rely on, may be rejected in the future.
 Much discussion of scientific change since Hanson centres on the distinction between the contexts of discovery and justification. The distinction is usually ascribed to the philosopher of science and probability theorist Hans Reichenbach (1891 - 1953) and, as generally interpreted, reflects the attitude of the logical empiricist movement and of the philosopher of science Karl Raimund Popper (1902 - 1994), who overturned the traditional attempts to found scientific method on the support that experience gives to suitably formed generalizations and theories. Stressing the difficulty that the problem of ‘induction’ puts in front of any such method, Popper substitutes an epistemology that starts with the bold, imaginative formation of hypotheses. These face the tribunal of experience, which has the power to falsify them, but not to confirm them. A hypothesis that survives the ordeal of attempted refutation can be provisionally accepted as ‘corroborated’, but never assigned a probability; for Popper, falsifiability also marks the line of demarcation between science and metaphysics.
 The promise of a ‘logic’ of discovery, in the sense of a set of algorithmic, content-neutral rules of reasoning distinct from justification, remains unfulfilled. Upholding the distinction between discovery and justification, but claiming nonetheless that discovery is philosophically relevant, many recent writers propose that discovery is a matter of a ‘methodology’, ‘rationale’, or ‘heuristic’ rather than a ‘logic’. That is, discovery is guided only by a loose body of strategies or rules of thumb - still formulable independently of the content of scientific belief - which one has some reason to hope will lead to the discovery of a promising hypothesis.
 In the enthusiasm over the problem of scientific change in the 1960s and 1970s, the most influential theories were based on holistic viewpoints within which scientific ‘traditions’ or ‘communities’ allegedly worked. The American philosopher of science Thomas Samuel Kuhn (1922 - 96) suggested that the defining characteristic of a scientific tradition is its ‘commitment’ to a shared ‘paradigm’. A paradigm is ‘the source of the methods, problem-field, and standards of solution accepted by any mature scientific community at any given time’. Normal science, the working out of the paradigm, gives way to scientific revolution when ‘anomalies’ in it precipitate a crisis leading to the adoption of a new paradigm. Besides many studies contending that Kuhn’s model fails for some particular historical case, three major criticisms of Kuhn’s view are as follows. First, ambiguities exist in his notion of a paradigm: a paradigm includes a cluster of components - ‘conceptual, theoretical, instrumental, and methodological’ commitments - and involves more than is capturable in a single theory, or even in words. Second, how can a paradigm fail, since it determines what count as facts, problems, and anomalies? Third, since what counts as a ‘reason’ is paradigm-dependent, there remains no trans-paradigmatic reason for accepting a new paradigm upon the failure of an older one.
 Such radical relativism is exacerbated by the ‘incommensurability’ thesis shared by Kuhn (1962) and Feyerabend (1975). Even so, Feyerabend’s differences with Kuhn can be reduced to two basic ones. The first is that Feyerabend’s variety of incommensurability is more global and cannot be localized in the vicinity of a single problematic term or even a cluster of terms: Feyerabend holds that fundamental changes of theory lead to changes in the meaning of all the terms in a particular theory. The other significant difference concerns the reasons for incommensurability. Whereas Kuhn thinks that incommensurability stems from specific translational difficulties involving problematic terms, Feyerabend’s variety of incommensurability seems to result from a kind of extreme holism about the nature of meaning itself. Feyerabend is more consistent than Kuhn in giving a linguistic characterization of incommensurability, and there seems to be more continuity in his usage over time. He generally frames the incommensurability claim in terms of language, but the precise reasons he cites for incommensurability are different from Kuhn’s. One of Feyerabend’s most detailed attempts to illustrate the concept of incommensurability involves the medieval European impetus theory and Newtonian classical mechanics. He claims that ‘the concept of impetus, as fixed by the usage established in the impetus theory, cannot be defined in a reasonable way within Newton’s theory’.
 Yet, on several occasions Feyerabend explains the reasons for incommensurability by saying that there are certain ‘universal rules’ or ‘principles of construction’ which govern the terms of one theory and which are violated by the other theory. Since the second theory violates such rules, any attempt to state the claims of that theory in terms of the first will be rendered futile. ‘We have a point of view (theory, framework, cosmos, mode of representation) whose elements (concepts, facts, pictures) are built up in accordance with certain principles of construction. The principles involve something like a “closure”: there are things that cannot be said, or “discovered”, without violating the principles (which does not mean contradicting them).’ Taking such principles as ‘universal’, he states: ‘Let us call a discovery, or a statement, or an attitude incommensurable with the cosmos (the theory, the framework) if it suspends some of its universal principles’. As an example of this phenomenon, consider two theories, T and T*, where T is classical celestial mechanics, including its space-time framework, and T* is general relativity theory. Principles such as the absence of an upper limit for velocity govern all the terms in celestial mechanics, and those terms cannot be expressed once such principles are violated, as they will be by general relativity theory. Even so, the meaning of terms is paradigm-dependent, so that a paradigm tradition is ‘not only incompatible but often actually incommensurable with that which has gone before’. Different paradigms cannot even be compared, for both standards of comparison and meaning are paradigm-dependent.
 Responses to incommensurability have been profuse in the philosophy of science, and only a small fraction can be sampled here; two main trends, however, may be distinguished. The first denies some aspect of the claim and suggests a method of forging a linguistic comparison among theories, while the second, though not necessarily accepting the claim of linguistic incommensurability, proceeds to develop other ways of comparing scientific theories.
 In the first camp are those who have argued that at least one component of meaning is unaffected by untranslatability: namely, reference. Israel Scheffler (1982) enunciates this influential idea in response to incommensurability, but he does not supply a theory of reference to demonstrate how the reference of terms from different theories can be compared. Later writers seem to be aware of the need for a full-blown theory of reference to make this response successful. Hilary Putnam (1975) argues that the causal theory of reference can be used to give an account of the meaning of natural kind terms, and suggests that the same can be done for scientific terms in general; but the causal theory was first proposed as a theory of reference for proper names, and there are serious problems with the attempt to apply it to science. An entirely different linguistic response to the incommensurability claim is found in the work of the American philosopher Donald Herbert Davidson (1917 - 2003), where comparison takes place within a generally ‘holistic’ theory of knowledge and meaning. A radical interpreter can tell when a subject holds a sentence true and, using the principle of ‘charity’, ends up making an assignment of truth conditions to individual sentences. Although Davidson is a defender of the doctrines of the ‘indeterminacy’ of radical translation and the ‘inscrutability’ of reference, his approach has seemed to many to offer some hope of identifying meaning through an extensional approach to language. Davidson is also known for his rejection of the idea of a conceptual scheme, thought of as something peculiar to one language or one way of looking at the world.
 The second kind of response to incommensurability looks for non-linguistic ways of making a comparison between scientific theories. Among these responses one can distinguish two main approaches. One advocates expressing theories in model-theoretic terms, thus espousing a mathematical mode of comparison. This position has been advocated by writers such as Joseph Sneed and Wolfgang Stegmüller, who have shown how to discern certain structural similarities among theories in mathematical physics. But the methods of this ‘structuralist approach’ do not seem applicable to any but the most highly mathematized scientific theories. Moreover, some advocates of this approach have claimed that it lends support to a model-theoretic analogue of Kuhn’s incommensurability claim. Another approach takes scientific theories to be entities in the minds or brains of scientists, and regards them as amenable to the techniques of recent cognitive science; proponents include Paul Churchland, Ronald Giere, and Paul Thagard. Thagard’s (1992) is perhaps the most sustained cognitive attempt to reply to incommensurability. He uses techniques derived from the connectionist research programme in artificial intelligence, but relies crucially on a linguistic mode of representing scientific theories without articulating the theory of meaning presupposed. Interestingly, Churchland (1992), another cognitivist who urges using connectionist methods to represent scientific theories, argues that connectionist models vindicate Feyerabend’s version of incommensurability.
 The issue of incommensurability remains a live one. It does not arise just for a logical empiricist account of scientific theories, but for any account that allows for the linguistic representation of theories. Discussions of linguistic meaning cannot be banished from the philosophical analysis of science, simply because language figures prominently in the daily work of science itself, and its place is not about to be taken over by any other representational medium. Therefore, the challenge facing anyone who holds that the scientific enterprise sometimes requires us to make a point-by-point linguistic comparison of rival theories is to respond to the specific semantic problems raised by Kuhn and Feyerabend. However, if one does not think that such a piecemeal comparison of theories is necessary, then the challenge is to articulate another way of putting scientific theories in the balance and weighing them against one another.
 The state of science at any given time is characterized, in part at least, by the theories that are ‘accepted’ at that time. Presently accepted theories include quantum theory, the general theory of relativity, and the modern synthesis of Darwin and Mendel, as well as lower-level (but still clearly theoretical) assertions such as that DNA has a double helical structure, that the hydrogen atom contains a single electron, and so forth. What precisely is involved in accepting a theory?
 The commonsense answer might appear to be that given by the scientific realist: to accept a theory means, at root, to believe it to be true, or at any rate ‘approximately’ or ‘essentially’ true. Not surprisingly, the state of theoretical science at any time is in fact too complex to be captured fully by any such single notion.
 For one thing, theories are often firmly accepted while being explicitly recognized to be idealizations. The use of idealizations raises a number of problems for the philosopher of science. One such problem is that of confirmation. On the account which commanded virtually universal assent in the eighteenth and nineteenth centuries, confirming evidence for a hypothesis is evidence which increases its probability. Presumably, if it could be shown that such a hypothesis is sufficiently well confirmed by the evidence, then that would be grounds for accepting it. If, then, it could be shown that observational evidence could confirm such transcendent hypotheses at all, then that would go some way towards solving the problem of induction. Nevertheless, thinkers as diverse in their outlook as Edmund Husserl and Albert Einstein have pointed to idealizations as the hallmark of modern science.
 Again, theories may be accepted, not be regarded as idealizations, and yet be known not to be strictly true - for scientific, rather than abstruse philosophical, reasons. For example, quantum theory and relativity theory were uncontroversially listed above as among the theories presently accepted in science. Yet it is known that the two theories are mutually inconsistent: relativity requires that its fields are not quantized, while quantum theory says that fundamentally everything is. It is acknowledged that what is needed is a synthesis of the two theories, a synthesis which cannot of course (in view of their logical incompatibility) leave both theories, as presently understood, fully intact. (This synthesis is supposed to be supplied by quantum field theory, but it is not yet known how to articulate that theory fully.) None of this means that the present quantum and relativistic theories are regarded as having a merely conjectural character. Instead, the attitude seems to be that they are bound to survive in modified form as limiting cases in the unifying theory of the future - this is why a synthesis is consciously sought.
 In addition, there are theories that are regarded as frankly conjectural while nonetheless being accepted in some sense: it is implicitly allowed that these theories might not live on even as approximations or limiting cases in future science, though they are certainly the best accounts we presently have of their related range of phenomena. This used to be (and perhaps still is) the general view of the theory of quarks: few would put quarks on a par with electrons, say, but all regard them as more than simply interesting possibilities.
 Finally, the phenomenon of change in accepted theory during the development of science must be taken into account. From the beginning, the distance between idealization and the actual practice of science was evident. Karl Raimund Popper (1902 - 1994), the philosopher of science, noted that an element of decision is required in determining what constitutes a ‘good’ observation. Questions of this sort, which lead to an examination of the relationship between observation and theory, have prompted philosophers of science to raise a series of more specific questions. What reasoning was in fact used to make inferences about light waves, which cannot be observed, from diffraction patterns, which can be? Was such reasoning legitimate? Are light waves to be construed as postulated entities just as real as water waves, only much smaller? Or should the wave theory be understood non-realistically, as an instrumental device for organizing and predicting observable optical phenomena such as the reflection, refraction, and diffraction of light? Such questions presuppose that there is a clear distinction between what can and cannot be observed. Is such a distinction clear? If so, how is it to be drawn? These issues are among the central ones raised by philosophers of science about theories that postulate unobservable entities.
 Reasoning in the ‘context of justification’ proceeds by deriving conclusions deductively from the assumptions of the theory. Among these conclusions at least some will describe states of affairs capable of being established as true or false by observation. If these observational conclusions turn out to be true, the theory is shown to be empirically supported or probable. On a weaker version due to Karl Popper (1959), the theory is said to be ‘corroborated’, meaning simply that it has been subjected to test and has not been falsified. Should any of the observational conclusions turn out to be false, the theory is refuted, and must be modified or replaced. So a hypothetico-deductivist can postulate any unobservable entities or events he or she wishes in the theory, so long as all the observational conclusions of the theory are true.
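The hypothetico-deductive pattern just described can be set out schematically (a standard textbook rendering, with our own notation for theory, auxiliaries, and observational consequence):

```latex
% T: the theory under test (possibly positing unobservables)
% A: auxiliary assumptions;  O: an observational consequence
% Derivation step:
T \wedge A \vdash O
% Favourable outcome: O observed true, so T is supported
% (or, on Popper's weaker reading, merely 'corroborated').
% Unfavourable outcome: O observed false; by modus tollens,
\neg O \vdash \neg (T \wedge A)
% so the theory or its auxiliary assumptions must be revised.
```

Note that falsification strikes only the conjunction of theory and auxiliaries, which is why refutation alone does not dictate which component must be given up.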
 Popper’s 1934 book, directed against the then generally accepted view that the empirical sciences are distinguished by their use of an inductive method, tackled two main problems: that of demarcating science from non-science (including pseudo-science and metaphysics), and the problem of induction. Popper proposed a falsificationist criterion of demarcation: science advances unverifiable theories and tries to falsify them by deducing predictive consequences and by putting the more improbable of these to searching experimental tests. Surviving such testing provides no inductive support for the theory, which remains a conjecture and may be overthrown subsequently. Popper’s answer to the Scottish philosopher, historian, and essayist David Hume (1711 - 76) was that Hume was quite right about the invalidity of inductive inference, but that this does not matter, because inductive inferences play no role in science; the problem of induction thereby drops out.
 Is a scientific hypothesis, then, to be tested against protocol statements - the basic statements which, in the logical positivist analysis of knowledge, were thought of as reporting the unvarnished and pre-theoretical deliverances of experience: what it is like here, now, for me? The central controversy concerned whether it was legitimate to couch them in terms of public objects and their qualities, or whether a less theoretically committal, purely phenomenal content could be found. The former option makes it hard to regard them as truly basic, whereas the latter option makes it difficult to see how they can be incorporated into objective science. The controversy is often thought to have been closed in favour of the public version by the ‘private language’ argument. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the ‘coherence theory’ of truth; it is now widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
 Popper advocated a strictly non-psychological reading of the empirical basis of science. He required ‘basic’ statements to report events that are ‘observable’ only in that they involve relative positions and movements of macroscopic physical bodies in certain space-time regions, and which are relatively easy to test. Perceptual experience was denied an epistemological role (though allowed a causal one): basic statements are accepted as a result of a convention or agreement between scientific observers. Should such an agreement break down, the disputed basic statements would need to be tested against further statements that are still more ‘basic’ and even easier to test.
 But there is an easy general result as well. Assume that a theory is any deductively closed set of sentences; assume, with the empiricist, that the language in which these sentences are expressed has two sorts of predicates (observational and theoretical); and assume, finally, that entailment of the evidence is the only constraint on empirical adequacy. Then there are always indefinitely many different theories which are as empirically adequate as any given theory. Take a theory T as the deductive closure of some set of sentences in a language in which the two sets of predicates are differentiated. Consider the restriction of T to quantifier-free sentences expressed purely in the observational vocabulary; then any conservative extension of that restricted set of T’s consequences back into the full vocabulary is a ‘theory’ co-empirically adequate with - entailing the same singular observational statements as - T. Unless very special conditions apply (conditions which do not apply to any real scientific theory), some of these empirically equivalent theories will formally contradict T. (A similarly straightforward demonstration works for the currently fashionable account of theories as sets of models.)
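The construction in the preceding paragraph can be summarized schematically (our notation; a sketch of the general result, not a rendering of any particular author’s formalism):

```latex
% Language L = L_O \cup L_T (observational and theoretical predicates).
% A theory is a deductively closed set of L-sentences: T = Cn(T).
% Observational restriction of T:
T{\upharpoonright}_O \;=\; \{\varphi \in Cn(T) :
  \varphi \text{ quantifier-free and purely in } L_O\}
% Any theory T' in the full vocabulary that conservatively extends
% T|_O (adding no new purely observational consequences) satisfies
T'{\upharpoonright}_O \;=\; T{\upharpoonright}_O
% and is therefore empirically equivalent to T, even though T' may
% formally contradict T in its theoretical (L_T) claims.
```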
 Many of the problems concerning scientific change have been clarified, and many new answers suggested. Nevertheless, concepts central to it (like ‘paradigm’, ‘core’, ‘problem’, ‘constraint’, ‘verisimilitude’) remain formulated in highly general, even programmatic ways, and many devastating criticisms of the doctrines based on them have not been answered satisfactorily.
 Problems centrally important for the analysis of scientific change have been neglected. There are, for instance, lingering echoes of logical empiricism in claims that the methods and goals of science are unchanging, and thus independent of scientific change itself, or that if they do change, they do so for reasons independent of those involved in substantive scientific change. By their very nature, such approaches fail to address the changes that actually occur in science. For example, even supposing that science ultimately seeks the general and unalterable goal of ‘truth’ or ‘verisimilitude’, that injunction by itself gives no guidance as to what scientists should seek or how they should go about seeking it. More specific goals do provide guidance, and, as the transition from mechanistic to gauge-theoretic goals illustrates, those goals are often altered in the light of discoveries about what is achievable, or about what kinds of theories are promising. A theory of scientific change should account for these kinds of goal changes, and for how, once accepted, they alter the rest of the patterns of scientific reasoning and change, including the ways in which more general goals and methods may be reconceived.
 Traditionally, philosophy has concerned itself with relations between propositions which are specifically relevant to one another in form or content. So viewed, philosophical explanations of scientific change should appeal to factors which are clearly more scientifically relevant in their content to the specific direction of new scientific research and conclusions than are social factors whose overt relevance lies elsewhere. In recent years, however, many writers, especially in the ‘strong programme’ in the sociology of science, have maintained that all purportedly ‘rational’ practices must be assimilated to social influences.
 Such claims are excessive. Despite allegations that even what counts as evidence is a matter of mere negotiated agreement, many consider that the last word has not been said on the idea that there is, in some deeply important sense, a ‘given’ in experience in terms of which we can, at least partially, judge theories. Again, studies continue to document the role of reasonably accepted prior beliefs (‘background information’) in helping to guide these and other judgements. Even if we can no longer naively affirm the sufficiency of ‘internal’ givens and background scientific information to account for what science should and can be, and certainly for what it is often in human practice, neither should we take the criticisms of it for granted, accepting that scientific change is explainable only by appeal to external factors.
 Equally, we cannot accept too readily the assumption (another logical empiricist legacy) that our task is to explain science and its evolution by appeal to meta-scientific rules or goals, or metaphysical principles, arrived at in the light of purely philosophical analysis, and altered (if at all) by factors independent of substantive science. For such trans-scientific analyses, even while claiming to explain ‘what science is’, do so in terms ‘external’ to the processes by which science actually changes.
 Externalist claims are premature: not enough is yet understood about the roles of indisputably scientific considerations in shaping scientific change, including changes of method and goals. Even if we ultimately cannot accept the traditional ‘internalist’ approach in philosophy of science, as philosophers concerned with the form and content of reasoning we must determine accurately how far it can be carried. For that task, historical and contemporary case studies are necessary but insufficient: too often the positive implications of such studies are left unclear, and it is too hastily assumed that whatever lessons are generated therefrom apply equally to later science. What is needed is a systematic account integrating the revealed patterns of scientific reasoning, and the ways they are altered, into a coherent interpretation of the knowledge-seeking enterprise - a theory of scientific change. Whether such efforts prove successful or not, it is only through attempting to give such a coherent account in scientific terms, or through understanding our failure to do so, that it will be possible to assess precisely the extent to which trans-scientific factors (meta-scientific, social, or otherwise) must be included in accounts of scientific change.
 The history of quantum theory offers a concrete illustration of such change. In 1925 the old quantum mechanics of Planck, Einstein, and Bohr was replaced by the new (matrix) quantum mechanics of Born, Heisenberg, Jordan, and Dirac. In 1926 Schrödinger developed wave mechanics, which proved to be equivalent to matrix mechanics in the sense that the two led to the same energy levels. Dirac and Jordan then joined the two theories into one transformation quantum theory. In 1932 von Neumann presented his Hilbert space formulation of quantum mechanics and proved a representation theorem showing that the structures of transformation theory were isomorphic to it. Several notions of theory identity are involved in such episodes: theory individuation, theoretical equivalence, and empirical equivalence.
 What determines whether theories T1 and T2 are instances of the same theory or distinct theories? By construing scientific theories as partially interpreted syntactical axiom systems TC, positivism made the specifics of the axiomatization the individuating features of the theory. Thus different choices of axioms T, or alterations in the correspondence rules C - say, to accommodate a new measurement procedure - result in a new theory; significant alterations in the axiomatization would yield not only a new theory T’C’ but one with changed meanings τ’ for the theoretical descriptive terms τ. Kuhn and Feyerabend maintained that the resulting change could make TC and T’C’ non-comparable, or ‘incommensurable’. Attempts to explore individuation issues for theories through the medium of meaning change or incommensurability proved unsuccessful and have been largely abandoned.
 Individuation of theories in actual scientific practice is at odds with the positivistic analyses. For example, the difference-equation, differential-equation, and Hamiltonian versions of classical mechanics are all formulations of one theory, though they differ in how fully they characterize classical mechanics. It follows that syntactical specifics of theory formulation cannot be individuating features, which is to say that scientific theories are not linguistic entities. Rather, theories must be some sort of extra-linguistic structure which can be referred to through the medium of alternative and even inequivalent formulations (as with classical mechanics). Also, the various experimental designs and so forth incorporated into positivistic correspondence rules cannot be individuating features of theories, for improved instrumentation or experimental technique does not automatically produce a new theory. Accommodating these individuation features was a main motivation for the semantic conception of theories, on which theories are state spaces or other extra-linguistic structures standing in mapping relations to phenomena.
 Scientific theories undergo development, are refined, and change. Both syntactic and semantic analyses of theories concentrate on theories at mature stages of development, and it is an open question whether either approach adequately individuates theories undergoing active development.
 Under what circumstances are two theories equivalent? On syntactical approaches, axiomatizations T1 and T2 having a common definitional extension would be sufficient; by Robinson’s theorem, T1 and T2 must have a model in common to be compatible, and they will be equivalent if they have precisely the same (or equivalent) sets of models. On the semantic conception the theories will be presented as two sets of structures (models), M1 and M2, and the theories will be equivalent just in case we can prove a representation theorem showing that M1 and M2 are isomorphic (structurally equivalent). In this way von Neumann showed that transformation quantum theory and the Hilbert space formulation were equivalent.
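The two criteria of theoretical equivalence mentioned here can be stated side by side (schematically, in our own notation):

```latex
% Syntactic criterion: T_1 and T_2 are equivalent iff they have a
% common definitional extension T^+, where each D_i explicitly
% defines the other theory's vocabulary in terms of its own:
Cn(T_1 \cup D_1) \;=\; T^{+} \;=\; Cn(T_2 \cup D_2)
% Semantic criterion: with model classes M_1 and M_2, equivalence
% requires a representation theorem establishing an isomorphism:
M_1 \;\cong\; M_2
% (as in von Neumann's proof relating transformation theory to the
% Hilbert space formulation of quantum mechanics).
```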
 The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche (1844 - 1900). After declaring that God and the ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’, summarily dismissing all previous philosophical attempts to articulate the ‘will to truth’. On his view, the claim to a ‘will to truth’ - including that credited to the doing of science - disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions or manifestations of the individual ‘will’.
 In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.
 Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity: science, he said, confines itself to natural phenomena and favours a reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
 Nietzsche’s emotionally charged defence of intellectual freedom and radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859 - 1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.
 The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism, the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. This obvious attribution of a direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of this cultural ambience and the ways in which the conflict might be resolved.
 The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.
 Albert Einstein unveiled two theories: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space.
 If the universe is a seamlessly interactive system that evolves to a higher level of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a singular significant whole that evinces a ‘progressive principled order’ in the complementary relations of its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.
 But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute it. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated by appeals to scientific knowledge.
 Issues surrounding certainty are especially connected with those concerning ‘scepticism’. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., that there is a gulf between appearance and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus, so that the scepticism of Pyrrho and the new Academy became a system of argument opposing dogmatism, and particularly the philosophical system-building of the Stoics.
 As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoic conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptics recommended epochē, or the suspension of belief, and then went on to celebrate a way of life whose object was ataraxia, or the tranquillity resulting from suspension of belief.
 By contrast, mitigated scepticism accepts everyday or commonsense belief, not as the deliverance of reason, but as due more to custom and habit; nonetheless, it distrusts the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic; in the ‘method of doubt’ he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in categories of ‘clear and distinct’ ideas, not far removed from the phantasiá kataleptikê of the Stoics.
 Many sceptics have traditionally held that knowledge requires certainty, and they affirm that certain knowledge beyond doubt is not possible. Consider, in part, the principle that every effect is a consequence of an antecedent cause or causes: for causality to hold it is not necessary for an effect to be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, philosophers have generally held that knowledge does not require certainty. Except for alleged cases of self-evident truths, it has often been thought that anything known must satisfy certain criteria as well as being true; for ‘deduction’ or ‘induction’ there will be criteria specifying when acceptance is warranted, and, as with alleged cases of self-evident truths, general principles specifying the sort of consideration that will make acceptance warranted to some degree.
 Besides, there is another view - the absolutely global view that we do not have any knowledge whatsoever. However, it is doubtful that any philosopher seriously entertains absolute or complete scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to ‘the evident’; the non-evident is any belief that requires evidence in order to be warranted.
 René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas; what he questioned was whether they ‘corresponded’ to anything beyond ideas.
 All the same, Pyrrhonism and Cartesian scepticism are forms of a nearly global scepticism. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic’s mill. The Pyrrhonist will argue that no non-evident, empirical belief is sufficiently warranted, whereas a Cartesian sceptic will agree that no empirical belief about anything other than one’s own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views thus concerns the stringency of the requirements for a belief being sufficiently warranted to count as knowledge.
 A Cartesian requires certainty, but a Pyrrhonist merely requires that a belief be more warranted than its negation.
 Cartesian scepticism takes its name from the arguments with which Descartes motivates doubt. On this view we do not have knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly, is that there is a legitimate doubt about all such propositions, because there is no way to justifiably deny that our senses are being stimulated by some cause radically different from the objects which we normally think affect our senses. Thus, if the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.
 Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, the arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better reason for believing a proposition than for believing its negation; a Cartesian need only appeal to the requirement of certainty.
 Among the many contributions pragmatism has made to the theory of knowledge, it is possible to identify a set of shared doctrines, and also to discern two broad styles of pragmatism. Both styles agree that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.
 Pragmatism of a reformist sort repudiates the requirement of absolute certainty for knowledge, insists on the connection of knowledge with activity, grants the legitimacy of traditional questions about the truth-conduciveness of our cognitive practices, and sustains a conception of truth objective enough to give those questions bite.
 Pragmatism of a revolutionary sort, by contrast, relinquishes that objectivity and acknowledges no legitimate epistemological questions over and above those that arise naturally within our current cognitive practice.
 It seems clear that certainty is a property that can be ascribed to either a person or a belief. We can say that a person ‘S’ is certain, or we can say that a proposition ‘p’ is certain. The two uses can be connected by saying that ‘S’ has the right to be certain just in case ‘p’ is sufficiently warranted.
 In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. More or less, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or ever possible, either for any proposition at all, or for any proposition from some suspect family (ethics, theology, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what was hitherto taken to be certain. Others include reminders of the divergence of human opinion, and of the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our system of beliefs is built. Others reject the metaphor, looking instead for mutual support and coherence, without foundations.
 In moral theory, moreover, there are the views that certain moral standards are inviolable - absolute in the sense of being independent of variable human desires, policies, or prescriptions.
 In spite of the notorious difficulty of reading Kantian ethics, a hypothetical imperative embeds a command which is in place only given some antecedent desire or project: ‘If you want to look wise, stay quiet’. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction does not apply. A categorical imperative, by contrast, cannot be so avoided: it is a requirement that binds anybody, regardless of their inclinations. It could be represented as, for example, ‘tell the truth (regardless of whether you want to or not)’. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: ‘If you crave drink, don’t become a bartender’ may be regarded as an absolute injunction applying to anyone, although it is activated only in the case of those with the stated desire.
 In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five formulations of the categorical imperative: (1) the formula of universal law: ‘act only on that maxim through which you can at the same time will that it should become a universal law’; (2) the formula of the law of nature: ‘act as if the maxim of your action were to become through your will a universal law of nature’; (3) the formula of the end-in-itself: ‘act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end’; (4) the formula of autonomy, or considering ‘the will of every rational being as a will which makes universal law’; (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
 A categorical proposition, by contrast, is simply one that is not conditional: an unqualified affirmation or denial of ‘p’. Modern opinion is wary of the distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?) = ‘if X is given a range of tasks, she performs them better than many people’ (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
 Consider the concept of a field, central to physical theory. A field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. That is, are force fields pure potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require admitting ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding how forces of attraction and repulsion can be ‘grounded’ in the properties of the medium.
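 The ‘dispositional’ reading described above can be made concrete in a small sketch (not from the text, and purely illustrative): a field is just a function from points in space to the force per unit mass a hypothetical test particle would feel there. Here the example is a Newtonian point-mass gravitational field; the mass and position values are assumptions chosen for illustration.

```python
import math

G = 6.674e-11  # Newtonian gravitational constant, N m^2 / kg^2


def gravitational_field(source_mass, source_pos, point):
    """Field value at `point`: the force per unit mass a test particle
    would experience if it were located there (the dispositional reading)."""
    dx = [p - s for p, s in zip(point, source_pos)]
    r = math.sqrt(sum(d * d for d in dx))
    if r == 0:
        raise ValueError("field undefined at the source point")
    mag = G * source_mass / r**2          # inverse-square magnitude
    return [-mag * d / r for d in dx]     # directed toward the source


# The field assigns a vector to every point in space; e.g. at one
# Earth radius from an Earth-mass point source the magnitude is
# roughly the familiar surface value of about 9.8 m/s^2.
g = gravitational_field(5.97e24, (0.0, 0.0, 0.0), (6.371e6, 0.0, 0.0))
```

Note that the function answers only counterfactual questions (‘what force *would* a particle feel here?’), which is exactly why the realist asks what, if anything, grounds these dispositions.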
 The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to ‘action at a distance’ muddies the water. It is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Michael Faraday, with whose work the physical notion became established. In his paper ‘On the Physical Character of the Lines of Magnetic Force’ (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether the motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.
 The pragmatic theory of truth is especially associated with the American psychologist and philosopher William James (1842-1910): the truth of a statement can be defined in terms of the ‘utility’ of accepting it. Put so baldly, the view is open to the objection that there are things that are false which it may be useful to accept, and conversely there are things that are true which it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects of those who possess it. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with evolutionary adaptation, or with utility in the widest sense. Considerations about the nature of belief and its relations with human attitudes, emotions, and action also bear on the matter: belief connects with truth on the one hand and with action on the other. One way of cementing the connection is the idea that natural selection adapted us as cognitive creatures because beliefs have effects - they work. Pragmatism has roots in Kant’s doctrine, and it continues to play an influential role in the theory of meaning and of truth.
 James, with characteristic generosity, exaggerated his debt to Charles S. Peirce (1839-1914), who had charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and who criticized its individualist insistence that the ultimate test of certainty is to be found in the individual’s personal consciousness.
 From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His ‘will to believe’ doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief’s benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.
 Such an approach, however, sets James’s theory of meaning apart from verificationism, which is dismissive of metaphysics. Unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and moral responses. Moreover, his approach treats metaphysical claims as assessable by a pragmatic standard of value, not as meaningless. It should also be noted that, in his more circumspect moments, James did not hold that even his broad set of consequences was exhaustive of a term’s meaning. ‘Theism’, for example, he took to have antecedent, definitional meaning, in addition to its important pragmatic meaning.
 James’s theory of truth reflects his teleological conception of cognition: a true belief is one which is compatible with our existing system of beliefs and leads us to satisfactory interaction with the world.
 Peirce’s famous pragmatist principle, however, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid: if we believe this, we expect that were we to dip litmus paper into it, the paper would turn red; we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a conceptual representation provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification using the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
 Most important is the application of the pragmatic principle in Peirce’s account of reality: when we take something to be real, we think it is ‘fated to be agreed upon by all who investigate’ the matter to which it relates. In other words, if I believe that it is really the case that ‘p’, then I expect that if anyone were to inquire into whether ‘p’, they would arrive at the belief that ‘p’. It is not part of the theory that the experimental consequences of our actions should be specified in a narrowly empiricist vocabulary - Peirce insisted that perceptual judgements are already laden with theory. Nor is it his view that the conditionals that clarify a concept are all analytic. In addition, in later writings he argued that the pragmatic principle could only be made plausible to someone who accepted metaphysical realism: it requires that ‘would-be’s are objective and, of course, real.
 If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it. Opponents may deny that the entities posited by the relevant discourse exist, or at least exist independently. The standard example is ‘idealism’, the view that reality is somehow mind-dependent or mind-co-ordinated - that the real objects comprising the ‘external world’ do not exist independently of knowing minds, but exist only as in some way correlative to mental operations. The doctrine centres on the conception that reality as we understand it reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the ‘real’ but even to the resulting character we attribute to it.
 The term ‘real’ is most straightforwardly used when qualifying another grammatical form: a real ‘x’ may be contrasted with a fake ‘x’, a failed ‘x’, a near ‘x’, and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some doctrine or theory we accept. The central error in thinking of reality as the totality of existence is to think of the ‘unreal’ as a separate domain of things, perhaps unfairly denied the benefits of existence.
 Consider the supposed non-existence of all things: ‘nothingness’ is the product of a logical confusion, that of treating the term ‘nothing’ as itself a referring expression instead of a ‘quantifier’. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as ‘Nothing is all around us’ talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate ‘is all around us’ has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. One difference between ‘existentialist’ and ‘analytic’ philosophy on this point is that, whereas the former is afraid of Nothing, the latter thinks that there is nothing to be afraid of.
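 The quantifier point can be put in first-order notation; the following sketch is added for illustration and is not part of the original text:

```latex
% 'Nothing is all around us' is a quantified denial, not a claim about a thing:
\neg \exists x\, \mathrm{AllAround}(x, \mathrm{us})
% equivalently:
\forall x\, \neg \mathrm{AllAround}(x, \mathrm{us})
% The confusion treats 'Nothing' as a singular term N and reads the
% sentence as AllAround(N, us), as if N named a special entity.
```

On this analysis the sentence denies that a predicate has instances; no entity called ‘Nothing’ is referred to at all.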
 A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other substantial problems arise over conceptualizing empty space and time.
 The realism debate is the standard opposition between those who affirm, and those who deny, the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (b. 1925), is borrowed from the ‘intuitionistic’ critique of classical mathematics: that the unrestricted use of the ‘principle of bivalence’ is the trademark of ‘realism’. However, this has to overcome counter-examples both ways: although Aquinas was a moral ‘realist’, he held that moral reality was not sufficiently structured to make every moral claim true or false, while Kant believed that he could use the law of bivalence happily in mathematics, precisely because it deals only with our own constructions. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things - surrounding objects really exist, independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who are impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
 The modern treatment of existence in the theory of ‘quantification’ is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. In this it is like number: when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with numbers is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like ‘This exists’, where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. ‘This exists’ is therefore unlike ‘Tamed tigers exist’, where a property is said to have an instance, for the word ‘this’ does not locate a property, but only an individual.
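 Frege’s second-order treatment can be displayed symbolically; this sketch is an added illustration, not part of the original text:

```latex
% 'Tamed tigers exist' attributes instantiation to a property:
\exists x\, \bigl(\mathrm{Tiger}(x) \land \mathrm{Tamed}(x)\bigr)
% Frege's dictum: to deny existence is to say the concept's number is zero:
\neg \exists x\, F(x) \;\Longleftrightarrow\; \#\{x : F(x)\} = 0
% The puzzle: 'This exists' offers no predicate F to quantify over,
% only the demonstrative 'this' picking out an individual a.
```

The contrast makes visible why ‘this exists’ resists the quantificational paraphrase that works so smoothly for general existence claims.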
 Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of the exemplification of properties.
 Philosophers have pondered whether the unreal belongs to the domain of Being. Nonetheless, there is little that can be said about Being from within the philosopher’s study, so it is not apparent that there can be such a subject as Being by itself. Nevertheless, the concept had a central place in philosophy from Parmenides to Heidegger. The essential question, ‘why is there something and not nothing?’, prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.
 In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains obscure. The celebrated ontological argument for the existence of God was first propounded by Anselm in his Proslogion. The argument proceeds by defining God as ‘something than which nothing greater can be conceived’. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we can conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but must exist in reality.
 The cosmological argument is an influential argument (or family of arguments) for the existence of God. Its premiss is that all natural things are dependent for their existence on something else; the totality of dependent things cannot then be explained unless it depends in turn on a non-dependent, or necessarily existent, being, which is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.
 Its main problem, nevertheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So the ‘God’ that serves to answer the question must exist of necessity: it must not be an entity of which the same kind of question can be raised. The other problem with the argument is that of attributing concern and care to the deity, and hence of connecting the necessarily existent being it derives with human values and aspirations.
 The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence: its existence is a metaphysical point or absolute presupposition of certain forms of thought.
 In the 20th century, modal versions of the ontological argument have been propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every possible world. It then allows that it is at least possible that an unsurpassably great being exists, which means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in a world entails that it exists and is perfect in every world); so it exists necessarily. The only plausible response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from the possibility of the necessity of p we can derive the necessity of p. A symmetrical proof starting from the assumption that it is possible that such a being not exist would derive that it is impossible that it exists.
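The modal step can be made explicit. Assuming the system S5 (the standard setting for such arguments) and writing g for ‘an unsurpassably great being exists’, the definition of unsurpassable greatness gives the necessary equivalence of g with its own necessity:

```latex
% Definition of unsurpassable greatness: necessarily, such a being exists
% iff it exists in every possible world.
\Box(g \leftrightarrow \Box g)

% In S5, possibility of necessity yields necessity: \Diamond\Box g \rightarrow \Box g.
% Plantinga's direction -- from the concession of possibility:
\Diamond g \;\Rightarrow\; \Diamond\Box g \;\Rightarrow\; \Box g \;\Rightarrow\; g

% The symmetrical derivation -- from the possibility of non-existence:
\Diamond\neg g \;\Rightarrow\; \neg\Box g \;\Rightarrow\; \neg g \;\Rightarrow\; \neg\Diamond g
```

The symmetry shows why the concession of possibility does all the work: whichever possibility one grants first settles the matter.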
 The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that the same result will occur as a consequence of the omission. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine of acts and omissions, not a murderer. Critics reply that omissions can be as deliberate and immoral as acts: if I am responsible for your food and fail to feed you, my omission is surely a killing. ‘Doing nothing’ can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be discerned or defined in a way that bears this general moral weight.
 The doctrine of double effect is a principle attempting to define when an action that has both good and bad results is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing with the death of nearby civilians as my intention would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one or two: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).
 The form is, therefore, in some sense available to inform a new body; yet it is not I who remain in existence after death unless the same personalized body becomes reanimated by the same form. On Aquinas’s account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulty at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth; it is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
 The special way that we each have of knowing our own thoughts, intentions, and sensations has been questioned by the many philosophical behaviourist and functionalist tendencies that have found it important to deny that there is such a special way, arguing that I know of my own mind in much the same way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegel, however, it came to mean universal or world history. The Enlightenment confidence that superstition was being replaced by science, reason, and understanding gave history a progressive moral thread, and under the influence of the German philosopher Gottfried Herder (1744-1803), who was instrumental in spreading Romanticism, and of Immanuel Kant, this idea was taken further, so that the philosophy of history became the detecting of a grand design: the unfolding of the evolution of human nature as attested by its successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible once the world of nature and that of thought become identified.
The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, culminating in freedom within the state; this in turn is the development of thought, or a logical development in which the various necessary moments in the life of the concept are successively achieved and improved upon. Hegel’s method is at its most successful when the object is the history of ideas, where the evolution of thinking may be made to march in step with the logical oppositions and their resolutions encountered by various systems of thought.
