\documentclass{report}
\usepackage[T1]{fontenc}
%\usepackage[a4paper]{geometry}
\usepackage[leqno]{amsmath}
\usepackage{amssymb}
\usepackage{theorem}
\usepackage[dvips]{graphicx}
\usepackage{makeidx}
\renewcommand{\th}{$^{\text{th}}$}
\newcommand{\authorfont}[1]{\textsc{#1}}
\renewcommand{\author}[2]{%
\def\tst{#1}
\ifx\tst\empty
\authorfont{#2}\index{#2}
\else
\authorfont{#2}\index{#1}
\fi}
\theorembodyfont{\normalfont}
\theoremheaderfont{\normalfont}
\newtheorem{axiom}{Ax.}
\newtheorem{theorem}{Th.}
\makeindex
\begin{document}
\small
\begin{verbatim}
The Project Gutenberg EBook of The Algebra of Logic, by Louis Couturat
This eBook is for the use of anyone anywhere at no cost and with
almost no restrictions whatsoever. You may copy it, give it away or
re-use it under the terms of the Project Gutenberg License included
with this eBook or online at www.gutenberg.org
Title: The Algebra of Logic
Author: Louis Couturat
Release Date: January 26, 2004 [EBook #10836]
Language: English
Character set encoding: TeX
*** START OF THIS PROJECT GUTENBERG EBOOK THE ALGEBRA OF LOGIC ***
Produced by David Starner, Arno Peters, Susan Skinner
and the Online Distributed Proofreading Team.
\end{verbatim}
\normalsize
\newpage
\pagenumbering{roman}
\begin{titlepage}
\begin{center}
{\LARGE\bfseries THE ALGEBRA OF LOGIC}\\[2.5cm]
BY\\[2.5cm]
{\Large LOUIS COUTURAT}\\[3cm]
AUTHORIZED ENGLISH TRANSLATION\\[2cm]
BY\\[2cm]
{\large LYDIA GILLINGHAM ROBINSON, B. A.}\\[1cm]
\textsc{With a Preface by PHILIP E. B. JOURDAIN, M. A. (Cantab.)}
\end{center}
\end{titlepage}
\section*{Preface}
\addcontentsline{toc}{section}{\numberline{}Preface}
Mathematical Logic is a necessary preliminary to logical
Mathematics. ``Mathematical Logic'' is the name given by
\author{}{Peano} to what is also known (after \author{}{Venn}) as ``Symbolic
Logic''; and Symbolic Logic is, in essentials, the Logic of
Aristotle,\index{Aristotle} given new life and power by being
dressed up in the wonderful---almost magical---armour and
accoutrements of Algebra. In less than seventy years, logic, to
use an expression of \author{De Morgan}{De Morgan's}, has so \emph{thriven}
upon symbols and, in consequence, so grown and altered that the
ancient logicians would not recognize it, and many old-fashioned
logicians will not recognize it. The metaphor is not quite
correct: Logic has neither grown nor altered, but we now see more
\emph{of} it and more \emph{into} it.
The primary significance of a symbolic calculus seems to lie in
the economy of mental effort which it brings about, and to this is
due the characteristic power and rapid development of mathematical
knowledge. Attempts to treat the operations of formal logic in an
analogous way had been made not infrequently by some of the more
philosophical mathematicians, such as \author{}{Leibniz} and
\author{}{Lambert}; but their labors remained little known, and it
was \author{}{Boole} and \author{}{De Morgan,} about the middle of
the nineteenth century, to whom a mathematical---though of course
non-quantitative---way of regarding logic was due. By this, not
only was the traditional or Aristotelian doctrine of logic
reformed and completed, but out of it has developed, in course of
time, an instrument which deals in a sure manner with the task of
investigating the fundamental concepts of mathematics---a task
which philosophers have repeatedly taken in hand, and in which
they have as repeatedly failed.
First of all, it is necessary to glance at the growth of
symbolism in mathematics, where alone it first reached perfection.
There have been three stages in the development
of mathematical doctrines: first came propositions with particular
numbers, like the one expressed, with signs subsequently
invented, by ``$2 + 3 = 5$''; then came more general laws holding
for all numbers and expressed by letters, such as
\begin{displaymath}
\mbox{``}(a + b) c = a c + b c\mbox{''};
\end{displaymath}
lastly came the knowledge of more general laws of functions and
the formation of the conception and expression ``function''. The
origin of the symbols for particular whole numbers is very
ancient, while the symbols now in use for the operations and
relations of arithmetic mostly date from the sixteenth and
seventeenth centuries; and these ``constant'' symbols together
with the letters first used systematically by \author{}{Viète}
(1540--1603) and \author{}{Descartes} (1596--1650), serve, by
themselves, to express many propositions. It is not, then,
surprising that \author{}{Descartes,} who was both a mathematician
and a philosopher, should have had the idea of keeping the method
of algebra while going beyond the material of traditional
mathematics and embracing the general science of what thought
finds, so that philosophy should become a kind of Universal
Mathematics. This sort of generalization of the use of symbols for
analogous theories is a characteristic of mathematics, and seems
to be a reason lying deeper than the erroneous idea, arising from
a simple confusion of thought, that algebraical symbols
necessarily imply something quantitative, for the antagonism there
used to be and is on the part of those logicians who were not and
are not mathematicians, to symbolic logic. This idea of a
universal mathematics was cultivated especially by
\author{}{Gottfried Wilhelm Leibniz} (1646--1716).
Though modern logic is really due to \author{}{Boole} and
\author{}{De Morgan,} \author{}{Leibniz} was the first to have a
really distinct plan of a system of mathematical logic. That this
is so appears from research---much of which is quite recent---into
\author{}{Leibniz's} unpublished work.
The principles of the logic of \author{}{Leibniz,} and
consequently of his whole philosophy, reduce to
two\footnote{\author{}{Couturat,} \emph{La Logique de Leibniz
d'après des documents inédits}, Paris, 1901, pp.~431--432, 48.}:
(1) All our ideas are compounded of a very small number of simple
ideas which
form the ``alphabet of human thoughts'';%
\index{Alphabet of human thought} (2) Complex ideas
proceed from these simple ideas by a uniform and symmetrical
combination which is analogous to arithmetical multiplication.
With regard to the first principle, the number of simple ideas is
much greater than \author{}{Leibniz} thought; and, with regard to the second
principle, logic considers three operations---which we shall meet
with in the following book under the names of logical multiplication,
logical addition%
\index{Addition!and multiplication!Logical} and negation---instead of only one.
``Characters''\index{Characters} were, with \author{}{Leibniz,}
any written signs, and ``real'' characters were those which---as
in the Chinese ideography---represent ideas directly, and not the
words for them. Among real characters, some simply serve to
represent ideas, and some serve for reasoning. Egyptian and
Chinese hieroglyphics and the symbols of astronomers and chemists
belong to the first category, but
\author{}{Leibniz} declared them to be imperfect, and desired the second
category of characters for what he called his ``universal
characteristic''.\footnote{\emph{Ibid}., p.~81.} It was not in the form of
an algebra that \author{}{Leibniz} first conceived his characteristic,
probably because he was then a novice in mathematics, but in the form of a
universal language or script.\footnote{\emph{Ibid}., pp.~51, 78.} It was in
1676 that he first dreamed of a kind of algebra of thought,%
\index{Algebra!of thought}\footnote{\emph{Ibid}., p.~61.} and it was the
algebraic notation which then served as model for the
characteristic.\footnote{\emph{Ibid}., p.~83.}
\author{}{Leibniz} attached so much importance to the invention of proper
symbols that he attributed to this alone the whole of his
discoveries in mathematics.\footnote{\emph{Ibid}., p.~84.} And, in
fact, his infinitesimal calculus\index{Calculus!Infinitesimal}
affords a most brilliant example of the importance of, and
\author{Leibniz}{Leibniz's} skill in devising, a suitable
notation.\footnote{\emph{Ibid}., pp.~84--87.}
Now, it must be remembered that what is usually understood by the name
``symbolic logic'', and which---though not its name---is chiefly due to
\author{}{Boole,} is what \author{}{Leibniz} called a \emph{Calculus
ratiocinator},\index{Calculus!ratiocinator@\emph{ratiocinator}} and is
only a part of the Universal Characteristic. In symbolic logic
\author{}{Leibniz} enunciated the principal
properties of what we now call logical multiplication, addition,%
\index{Addition!and multiplication!Logical} negation, identity,
class-inclusion, and the null-class; but the aim of
\author{}{Leibniz's} researches was, as he said, to create ``a
kind of general system of notation in which all the truths of
reason should be reduced to a calculus. This could be, at the same
time, a kind of universal written language, very different from
all those which have been projected hitherto; for the characters
and even the words would direct the reason, and the
errors---excepting those of fact---would only be errors of
calculation. It would be very difficult to invent this language or
characteristic, but very easy to learn it without any
dictionaries''. He fixed the time necessary to form it: ``I think
that some chosen men could finish the matter within five years'';
and finally remarked: ``And so I repeat, what I have often said,
that a man who is neither a prophet nor a prince can never
undertake any thing more conducive to the good of the human race
and the glory of God''.
In his last letters he remarked: ``If I had been less busy,
or if I were younger or helped by well-intentioned young
people, I would have hoped to have evolved a characteristic
of this kind''; and: ``I have spoken of my general characteristic
to the Marquis de l'Hôpital and others; but they paid no
more attention than if I had been telling them a dream. It
would be necessary to support it by some obvious use; but,
for this purpose, it would be necessary to construct a part
at least of my characteristic;---and this is not easy, above all
to one situated as I am''.
\author{}{Leibniz} thus formed projects of both what he called a
\emph{characteristica universalis}, and what he called a \emph{calculus
ratiocinator};\index{Calculus!ratiocinator@\emph{ratiocinator}} it is not
hard to see that these projects are interconnected, since a
perfect universal characteristic would comprise, it seems, a
logical calculus.\index{Calculus!Logical} \author{}{Leibniz} did
not publish the incomplete results which he had obtained, and
consequently his ideas had no continuators, with the exception of
\author{}{Lambert} and some others, up to the time when
\author{}{Boole,} \author{}{De Morgan,}
\author{}{Schröder,} \author{}{MacColl,} and others rediscovered his
theorems. But when the investigations of the principles of
mathematics became the chief task of logical symbolism, the aspect
of symbolic logic as a calculus ceased to be of such importance,
as we see in the work of \author{}{Frege} and \author{}{Russell.}
\author{}{Frege's} symbolism, though far better for logical analysis than
\author{}{Boole's} or the more modern \author{}{Peano's,} for instance, is far
inferior to \author{}{Peano's}---a symbolism in which the merits
of internationality and power of expressing mathematical theorems
are very satisfactorily attained---in practical convenience.
\author{}{Russell,} especially in his later works, has used the ideas of
\author{}{Frege,} many of which he discovered subsequently to, but
independently of, \author{}{Frege,} and modified the symbolism of
\author{}{Peano} as little as possible. Still, the complications
thus introduced take away that simple character which seems
necessary to a calculus, and which \author{}{Boole} and others
reached by passing over certain distinctions which a subtler logic
has shown us must ultimately be made.
Let us dwell a little longer on the distinction pointed out by
\author{}{Leibniz} between a \emph{calculus
ratiocinator}\index{Calculus!ratiocinator@\emph{ratiocinator}} and a
\emph{characteristica universalis} or \emph{lingua characteristica}. The
ambiguities of ordinary language are too well known for it to be necessary
for us to give instances. The objects of a complete logical symbolism are:
firstly, to avoid this disadvantage by providing an \emph{ideography}, in
which the signs represent ideas and the relations between them
\emph{directly} (without the intermediary of words), and secondly, so to
manage that, from given premises, we can, in this ideography, draw all the
logical conclusions which they imply by means of rules of transformation of
formulas analogous to those of algebra,---in fact, in which we can replace
reasoning by the almost mechanical process of calculation. This second
requirement is the requirement of a \emph{calculus
ratiocinator}.\index{Calculus!ratiocinator@\emph{ratiocinator}} It is
essential that the ideography should be complete, that only symbols with a
well-defined meaning should be used---to avoid the same sort of ambiguities
that words have---and, consequently,---that no suppositions should be
introduced implicitly, as is commonly the case if the meaning of signs is
not well defined. Whatever premises are necessary and sufficient%
\index{Condition!Necessary and sufficient} for a conclusion should be
stated explicitly.
Besides this, it is of practical importance,---though it is
theoretically irrelevant,---that the ideography should be concise,
so that it is a sort of stenography.
The merits of such an ideography are obvious: rigor of
reasoning is ensured by the calculus character; we are
sure of not introducing unintentionally any premise; and
we can see exactly on what propositions any demonstration
depends.
We can shortly, but very fairly accurately, characterize the dual
development of the theory of symbolic logic during the last sixty
years as follows: The \emph{calculus
ratiocinator}\index{Calculus!ratiocinator@\emph{ratiocinator}}
aspect of symbolic logic was developed by \author{}{Boole,}
\author{}{De Morgan,} \author{}{Jevons,} \author{}{Venn,}
\author{Peirce, C.~S.}{C. S. Peirce,} \author{}{Schröder,} Mrs.
\author{}{Ladd-Franklin} and others; the \emph{lingua characteristica}
aspect was developed by \author{}{Frege,} \author{}{Peano} and
\author{}{Russell.} Of course there is no hard and fast boundary-line
between the domains of these two parties. Thus \author{Peirce, C.~S.}{Peirce} and
\author{}{Schröder} early began to work at the foundations of
arithmetic with the help of the calculus of relations; and thus they
did not consider the logical calculus\index{Calculus!Logical} merely
as an interesting branch of algebra. Then \author{}{Peano} paid
particular attention to the calculative aspect of his symbolism.
\author{}{Frege} has remarked that his own symbolism is meant to be a
\emph{calculus
ratiocinator}\index{Calculus!ratiocinator@\emph{ratiocinator}} as
well as a \emph{lingua characteristica}, but the using of
\author{}{Frege's} symbolism as a calculus would be rather like using
a three-legged stand-camera for what is called ``snap-shot''
photography, and one of the outwardly most noticeable things about
\author{}{Russell's} work is his combination of the symbolisms of
\author{}{Frege} and \author{}{Peano} in such a way as to preserve
nearly all of the merits of each.
The present work is concerned with the \emph{calculus
ratiocinator}\index{Calculus!ratiocinator@\emph{ratiocinator}} aspect,
and shows, in an admirably succinct form, the beauty, symmetry and
simplicity of the calculus of logic regarded as an algebra. In fact, it can
hardly be doubted that some such form as the one in which
\author{}{Schröder} left it is by far the best for exhibiting it from this
point of view.\footnote{Cf.\ \author{Whitehead, A.~N.}{A.~N.\
Whitehead,}
\emph{A Treatise on Universal Algebra with Applications}, Cambridge,
1898.} The content of the
present volume corresponds to the two first volumes of
\author{}{Schröder's} great but rather prolix
treatise.\footnote{\emph{Vorlesungen über die Algebra der Logik},
Vol.~I, Leipsic, 1890; Vol.~II, 1891 and 1905. We may mention that
a much shorter \emph{Abriss} of the work has been prepared by
\author{Müller, Eugen}{Eugen Müller.} Vol.~III (1895) of
\author{}{Schröder's} work is on the logic of relatives founded by
\author{}{De Morgan} and \author{Peirce, C.~S.}{C. S. Peirce,}---a
branch of Logic that is only mentioned in the concluding sentences
of this volume.} Principally owing to the influence of
\author{Peirce, C.~S.}{C. S. Peirce,} \author{}{Schröder} departed
from the custom of \author{}{Boole,} \author{}{Jevons,} and himself (1877),
which consisted in the making fundamental of the notion of
\emph{equality}, and adopted the notion of \emph{subordination} or
\emph{inclusion} as a primitive notion. A more orthodox
\textsc{Boolian} exposition is that of
\author{}{Venn,}\footnote{\emph{Symbolic Logic}, London, 1881; 2nd ed.,
1894.} which also contains many valuable historical notes.
We will finally make two remarks.
When \author{}{Boole} (cf.~\S\ref{ch:2} below) spoke of propositions determining
a class of moments at which they are true, he really
(as did \author{}{MacColl}) used the word ``proposition'' for what we
now call a ``propositional function''. A ``proposition'' is a
thing expressed by such a phrase as ``twice two are four'' or
``twice two are five'', and is always true or always false. But
we might seem to be stating a proposition when we say:
``Mr. \author{Bryan, William Jennings}{William Jennings Bryan} is Candidate for the Presidency
of the United States'', a statement which is sometimes true
and sometimes false. But such a statement is like a mathematical
\emph{function} in so far as it depends on a \emph{variable}---the
time. Functions of this kind are conveniently distinguished
from such entities as that expressed by the phrase ``twice
two are four'' by calling the latter entities ``propositions'' and
the former entities ``propositional functions'': when the variable
in a propositional function is fixed, the function becomes a
proposition. There is, of course, no sort of necessity why
these special names should be used; the use of them is
merely a question of convenience and convention.
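The distinction can be put in modern computational terms. The sketch below is an editorial illustration, not part of Jourdain's preface; it models a propositional function as a Python function of time, using Bryan's actual candidacy years (1896, 1900, 1908) for the example above.

```python
# Editorial sketch: a proposition has a fixed truth value, while a
# propositional function yields a proposition only once its variable is fixed.

# Propositions: always true or always false.
twice_two_are_four = (2 * 2 == 4)   # True
twice_two_are_five = (2 * 2 == 5)   # False

# A propositional function of the variable "time" (here, a year).
def bryan_is_candidate(year):
    # Bryan was a presidential candidate in 1896, 1900, and 1908.
    return year in (1896, 1900, 1908)

# Fixing the variable turns the function into a proposition.
print(bryan_is_candidate(1896))  # True
print(bryan_is_candidate(1920))  # False
```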
In the second place, it must be carefully observed that, in
\S\ref{ch:13}, 0~and~1 are not \emph{defined} by expressions whose
principal copulas are relations of inclusion. A definition is
simply the convention that, for the sake of brevity or some other
convenience, a certain new sign is to be used instead of a group
of signs whose meaning is already known. Thus, it is the sign of
\emph{equality} that forms the principal copula. The theory of
definition has been most minutely studied, in modern times by
\author{}{Frege} and \author{}{Peano.}
\begin{quote}
Philip E. B. Jourdain.
\end{quote}
Girton, Cambridge, England.
\cleardoublepage
{\renewcommand{\footnote}[1]{}
\tableofcontents}
\cleardoublepage
\section*{Bibliography%
\footnote{This list contains only the works relating to the system of \author{}{Boole}
and \author{}{Schröder} explained in this work.}}
\addcontentsline{toc}{section}{\numberline{}Bibliography}
\begin{description}
\item[\textsc{George Boole}.] \emph{The Mathematical Analysis of Logic} (Cambridge
and London, 1847).
\item[---] \emph{An Investigation of the Laws of Thought} (London and
Cambridge, 1854).
\item[\textsc{W. Stanley Jevons}.] \emph{Pure Logic} (London, 1864).
\item[---] ``On the Mechanical Performance of Logical Inference''
(\emph{Philosophical Transactions}, 1870).
\item[\textsc{Ernst Schröder}.] \emph{Der Operationskreis des Logikkalkuls}
(Leipsic, 1877).
\item[---] \emph{Vorlesungen über die Algebra der Logik}, Vol.~I (1890),
Vol.~II (1891), Vol.~III: \emph{Algebra und Logik der Relative}
(1895) (Leipsic).\footnote{\textsc{Eugen Müller} has prepared a part, and is preparing more, of
the publication of supplements to Vols.~II and~III, from the papers left
by \textsc{Schröder}.}
\item[\textsc{Alexander MacFarlane}.] \emph{Principles of the Algebra of Logic,
with Examples} (Edinburgh, 1879).
\item[\textsc{John Venn}.] \emph{Symbolic Logic}, 1881; 2nd. ed., 1894 (London).%
\footnote{A valuable work from the points of view of history and bibliography.}
\item[] \emph{Studies in Logic} by members of the Johns Hopkins University
(Boston, 1883): particularly Mrs. \textsc{Ladd-Franklin},
\textsc{O. Mitchell} and \textsc{C. S. Peirce}.
\item[\textsc{A.~N.~Whitehead}.] \emph{A Treatise on Universal Algebra}. Vol. I
(Cambridge, 1898).
\item[---] ``Memoir on the Algebra of Symbolic Logic'' (\emph{American
Journal of Mathematics}, Vol.~XXIII, 1901).
\item[\textsc{Eugen Müller}.] \emph{Über die Algebra der Logik:}
I.~\emph{Die Grundlagen des Ge\-biete\-kalkuls;} II.~\emph{Das
Eliminationsproblem und die Syllogistik;} Programs of the Grand
Ducal Gymnasium of Tauberbischofsheim (Baden), 1900, 1901 (Leipsic).
\item[\textsc{W.~E.~Johnson}.] ``Sur la théorie des
égalités logiques'' (\emph{Bibliothèque du Congrès international de
Philosophie}. Vol.~III, \emph{Logique et Histoire des Sciences;}
Paris, 1901).
\item[\textsc{Platon Poretsky}.] \emph{Sept Lois fondamentales de la théorie des
égalités logiques} (Kazan, 1899).
\item[---] \emph{Quelques lois ultérieures de la théorie des égalités logiques}
(Kazan, 1902).
\item[---] ``Exposé élémentaire de la théorie des égalités logiques à
deux termes'' (\emph{Revue de Métaphysique et de Morale}. Vol. VIII,
1900.)
\item[---] ``Théorie des égalités logiques à trois termes'' (\emph{Bibliothèque
du Congrès international de Philosophie}, Vol.~III, \emph{Logique
et Histoire des Sciences;} Paris, 1901, pp.~201--233).
\item[---] \emph{Théorie des non-égalités logiques} (Kazan, 1904).
\item[\textsc{E. V. Huntington}.] ``Sets of Independent Postulates for the
Algebra of Logic'' (\emph{Transactions of the American Mathematical
Society}, Vol. V, 1904).
\end{description}
\cleardoublepage
\pagenumbering{arabic}
\part*{THE ALGEBRA OF LOGIC.}
\cleardoublepage
\section{Introduction}\label{ch:1}
The algebra of logic was founded by
\author{Boole}{George Boole} (1815--1864); it was developed and perfected
by \author{Schröder}{Ernst Schröder} (1841--1902). The fundamental laws
of this calculus were devised to express the principles of
reasoning, the ``laws of thought''. But this calculus may be
considered from the purely formal point of view, which is
that of mathematics, as an algebra based upon certain principles
arbitrarily laid down. It belongs to the realm of
philosophy to decide whether, and in what measure, this
calculus corresponds to the actual operations of the mind,
and is adapted to translate or even to replace argument;
we cannot discuss this point here. The formal value of this
calculus and its interest for the mathematician are absolutely
independent of the interpretation given it and of the application
which can be made of it to logical problems. In
short, we shall discuss it not as logic but as algebra.
\section{The Two Interpretations of the Logical Calculus}
\label{ch:2}\index{Calculus!Logical}
There is one circumstance of particular interest,
namely, that the algebra in question, like logic, is susceptible
of two distinct interpretations, the parallelism between them
being almost perfect, according as the letters represent concepts
or propositions. Doubtless we can, with \author{}{Boole} and
\author{}{Schröder,} reduce the two interpretations to one, by considering
the concepts on the one hand and the propositions
on the other as corresponding to \emph{assemblages} or \emph{classes}; since
a concept determines the class of objects to which it is
applied (and which in logic is called its \emph{extension}), and a
proposition determines the class of the instances or moments
of time in which it is true (and which by analogy can also
be called its extension). Accordingly the calculus of concepts%
\index{Concepts!Calculus of}\index{Calculus!of concepts} and the
calculus of propositions become reduced to
but one, the calculus of classes,%
\index{Classes!Calculus of}\index{Calculus!of classes} or, as
\author{}{Leibniz} called it, the theory of the whole and part, of that
which contains and that which is contained. But as a matter of fact, the
calculus of concepts and the calculus of propositions present certain
differences, as we shall see, which prevent their complete identification
from the formal point of view and consequently their reduction to a single
``calculus of classes''.
Accordingly we have in reality three distinct calculi, or,
in the part common to all, three different interpretations of
the same calculus. In any case the reader must not forget
that the logical value and the deductive sequence of the
formulas does not in the least depend upon the interpretations
which may be given them, and, in order to
make this necessary abstraction easier, we shall take care to
place the symbols ``C.~I.'' (\emph{conceptual interpretation}) and ``P.~I.''
(\emph{propositional interpretation}) before all interpretative phrases.
These interpretations shall serve only to render the formulas
intelligible, to give them clearness and to make their meaning
at once obvious, but never to justify them. They may
be omitted without destroying the logical rigidity of the
system.
In order not to favor either interpretation we shall say
that the letters represent \emph{terms}; these terms may be either
concepts or propositions according to the case in hand.
Hence we use the word \emph{term} only in the logical sense.
When we wish to designate the ``terms'' of a sum we shall
use the word \emph{summand} in order that the logical and mathematical
meanings of the word may not be confused. A term
may therefore be either a factor or a summand.
\section{Relation of Inclusion}\label{ch:3}
Like all deductive theories, the algebra of logic may be
established on various systems of principles\footnote{See
\author{}{Huntington,} ``Sets of Independent Postulates for the
Algebra of Logic'', \emph{Transactions of the Am.\ Math.\ Soc.},
Vol.~V, 1904, pp.~288--309. [Here he says: ``Any set of consistent
postulates would give rise to a corresponding algebra, viz., the
totality of propositions which follow from these postulates by
logical deductions. Every set of postulates should be free from
redundances, in other words, the postulates of each set should be
\emph{independent}, no one of them deducible from the rest.'']};
we shall choose the one which most nearly approaches the
exposition of \author{}{Schröder} and current logical
interpretation.
The fundamental relation of this calculus is the binary
(two-termed) relation which is called \emph{inclusion} (for
classes), \emph{subsumption} (for concepts), or \emph{implication}
(for propositions). We will adopt the first name as affecting
alike the two logical interpretations, and we will represent this
relation by the sign $<$ because it has formal properties
analogous to those of the mathematical relation $<$ (``less
than'') or more exactly $\leq$, especially the property of not
being symmetrical. Because of this analogy \author{}{Schröder}
represents this relation by the sign $\in$ which we shall not
employ because it is complex, whereas the relation of inclusion is
a simple one.
In the system of principles which we shall adopt, this
relation is taken as a primitive idea and is consequently
indefinable. The explanations which follow are not given
for the purpose of \emph{defining} it but only to indicate its meaning
according to each of the two interpretations.
C.~I.: When $a$ and $b$ denote concepts, the relation $a < b$
signifies that the concept $a$ is subsumed under the concept $b$;
that is, it is a species with respect to the genus $b$. From
the extensive point of view, it denotes that the class of $a$'s
is contained in the class of $b$'s or makes a part of it; or,
more concisely, that ``All $a$'s are $b$'s''. From the
comprehensive point of view it means that the concept $b$ is contained
in the concept $a$ or makes a part of it, so that consequently
the character $a$ implies or involves the character $b$. Example:
``All men are mortal''; ``Man implies mortal''; ``Who says
man says mortal''; or, simply, ``Man, therefore mortal''.
P.~I.: When~$a$ and $b$~denote propositions, the relation $a < b$
signifies that the proposition~$a$ implies or involves the
proposition $b$, which is often expressed by the hypothetical
judgement, ``If~$a$ is true, $b$~is true''; or by~``$a$
implies~$b$''; or more simply by~``$a$, therefore~$b$''. We see
that in both interpretations the relation $<$ may be translated
approximately by ``therefore''.
\emph{Remark}.---Such a relation as ``$a < b$'' is a proposition,
whatever may be the interpretation of the terms~$a$ and~$b$.
Consequently, whenever a $<$~relation has two like relations
(or even only one) for its members, it can receive only the
propositional interpretation, that is to say, it can only denote
an implication.
A relation whose members are simple terms (letters) is
called a \emph{primary} proposition; a relation whose members are
primary propositions is called a \emph{secondary} proposition, and
so on.
From this it may be seen at once that the propositional
interpretation is more homogeneous than the conceptual,
since it alone makes it possible to give the same meaning
to the copula~$<$ in both primary and secondary propositions.
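The two readings of the copula $<$ can be sketched in modern notation. The Python fragment below is an editorial illustration, not part of Couturat's text: classes are modelled as sets, propositions as booleans, and a helper \texttt{includes(a, b)} stands for $a < b$ in either interpretation.

```python
# Editorial sketch of the relation a < b under both interpretations.

def includes(a, b):
    """a < b: class inclusion (C. I.) or implication (P. I.)."""
    if isinstance(a, (set, frozenset)):
        return a <= b            # C. I.: the class of a's is contained in the b's
    return (not a) or b          # P. I.: if a is true, b is true

# C. I.: "All men are mortal" -- the class of men within the class of mortals.
men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Bucephalus"}
print(includes(men, mortals))    # True

# P. I.: an implication fails only when a true antecedent has a false consequent.
print(includes(False, True))     # True
print(includes(True, False))     # False
```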
\section{Definition of Equality}\label{ch:4}
There is a second copula
that may be defined by means of the first; this is the
copula~$=$ (``equal to''). By definition we have
\begin{displaymath}
a = b,
\end{displaymath}
whenever
\begin{displaymath}
a < b \text{ and } b < a
\end{displaymath}
are true at the same time, and then only. In other words,
the single relation $a = b$ is equivalent to the two
simultaneous relations $a < b$ and $b < a$.
In both interpretations the meaning of the copula~$=$ is
determined by its formal definition:
C.~I.: $a = b$ means, ``All~$a$'s are $b$'s~and all~$b$'s are~$a$'s'';
in other words, that the classes~$a$ and~$b$ coincide, that they are
identical.\footnote{This does not mean that the concepts~$a$ and~$b$
have the same meaning. Examples: ``triangle'' and ``trilateral'',
``equiangular triangle'' and ``equilateral triangle''.}
P.~I.: $a = b$ means that~$a$ implies~$b$ and~$b$ implies~$a$; in
other words, that the propositions~$a$ and~$b$ are equivalent,
that is to say, either true or false at the same
time.\footnote{This does not
mean that they have the same meaning. Example: ``The triangle ABC
has two equal sides'', and ``The triangle ABC has two equal
angles''.}
\emph{Remark.}---The relation of equality is symmetrical by very
reason of its definition: $a = b$ is equivalent to $b = a$. But
the relation of inclusion is not symmetrical: $a < b$ is not
equivalent to $b < a$, nor does it imply it. We might agree
to consider the expression $a > b$ equivalent to $b < a$, but
we prefer for the sake of clearness to preserve always the
same sense for the copula~$<$. However, we might translate
verbally the same inclusion $a < b$ sometimes by~``$a$ is contained
in~$b$'', and sometimes by ``$b$ contains $a$''.
In order not to favor either interpretation, we will call the first
member of this relation the \emph{antecedent}\index{Antecedent} and
the second the \emph{consequent}\index{Consequent}.
C.~I.: The antecedent is the \emph{subject} and the consequent is
the \emph{predicate} of a universal affirmative proposition.%
\index{Affirmative propositions}
P.~I.: The antecedent is the \emph{premise} or the
\emph{cause},\index{Cause} and the consequent is the
\emph{consequence}.\index{Consequence} When an implication is translated by
a \emph{hypothetical} (or \emph{conditional}) judgment the antecedent is
called the \emph{hypothesis} (or the \emph{condition}\index{Condition}) and
the consequent is called the \emph{thesis}.
When we shall have to demonstrate an equality we shall
usually analyze it into two converse inclusions and demonstrate
them separately. This analysis is sometimes made also
when the equality is a datum (a \emph{premise}).
When both members of the equality are propositions, it
can be separated into two implications, of which one is
called a \emph{theorem} and the other its \emph{reciprocal}. Thus whenever
a theorem and its reciprocal are true we have an
equality. A simple theorem gives rise to an implication
whose antecedent is the \emph{hypothesis} and whose consequent is
the \emph{thesis} of the theorem.
It is often said that the hypothesis is the \emph{sufficient condition}%
\index{Condition!Necessary and sufficient} of the thesis, and the
thesis the \emph{necessary condition} of the hypothesis; that is
to say, it is sufficient that the hypothesis be true for the
thesis to be true; while it is necessary that the thesis be true
for the hypothesis to be true also. When a theorem and its
reciprocal are true we say that its hypothesis
is the necessary and sufficient condition%
\index{Condition!Necessary and sufficient} of the thesis; that is to say,
that it is at the same time both cause and consequence.
\section{Principle of Identity}\label{ch:5}
The first principle or axiom
of the algebra of logic is the \emph{principle of identity}, which is
formulated thus:
\begin{axiom}\index{Axioms}
\begin{displaymath}
a < a,
\end{displaymath}
whatever the term $a$ may be.
\end{axiom}
C.~I.: ``All $a$'s are $a$'s'', \emph{i.e.}, any class whatsoever
is contained in itself.
P.~I.: ``$a$ implies $a$'', \emph{i.e.}, any proposition
whatsoever implies itself.
This is the primitive formula of the principle of identity.
By means of the definition of equality, we may deduce from
it another formula which is often wrongly taken as the expression
of this principle:
\begin{displaymath}
a = a,
\end{displaymath}
whatever~$a$ may be; for when we have
\begin{displaymath}
a < a, a < a,
\end{displaymath}
we have as a direct result,
\begin{displaymath}
a = a.
\end{displaymath}
C.~I.: The class~$a$ is identical with itself.
P.~I.: The proposition~$a$ is equivalent to itself.
\section{Principle of the Syllogism}\label{ch:6}
Another principle of the algebra of logic is the principle of the
\emph{syllogism}, which may be formulated as follows:
\begin{axiom}\index{Axioms}
\begin{displaymath}
(a < b) (b < c) < (a < c).
\end{displaymath}
\end{axiom}
C.~I.: ``If all~$a$'s are~$b$'s, and if all~$b$'s are~$c$'s, then all~$a$'s
are~$c$'s''. This is the principle of the \emph{categorical
syllogism}.\index{Categorical syllogism}
P.~I.: ``If~$a$ implies~$b$, and if~$b$ implies~$c$, then $a$~implies~$c$.''
This is the principle of the \emph{hypothetical syllogism}.
We see that in this formula the principal copula has always
the sense of implication because the proposition is a
secondary one.
By the definition of equality the consequences%
\index{Consequences!of the syllogism} of the
principle of the syllogism may be stated in the following
formulas\footnote{Strictly speaking, these formulas presuppose the laws of multiplication
which will be established further on; but it is fitting to cite
them here in order to compare them with the principle of the syllogism
from which they are derived.}:
\begin{alignat*}{2}
(a &< b) &\quad (b = c) &< (a < c), \\
(a &= b) &\quad (b < c) &< (a < c), \\
(a &= b) &\quad (b = c) &< (a = c).
\end{alignat*}
The conclusion is an equality only when both premises are equalities.
The preceding formulas can be generalized as follows:
\begin{alignat*}{3}
(a &< b) &\quad (b &< c) &\quad (c < d) &< (a < d), \\
(a &= b) &\quad (b &= c) &\quad (c = d) &< (a = d).
\end{alignat*}
Here we have the two chief formulas of the \emph{sorites}. Many
other combinations may be easily imagined, but we can have
an equality for a conclusion only when all the premises are
equalities. This statement is of great practical value. In a
succession of deductions we must pay close attention to see
if the transition from one proposition to the other takes place
by means of an equivalence or only of an implication. There
is no equivalence between two extreme propositions unless
all intermediate deductions are equivalences; in other words,
if there is one single implication in the chain, the relation
of the two extreme propositions is only that of implication.
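As a modern aside, the principle of the syllogism and the sorites can be verified mechanically in the conceptual interpretation by reading the copula~$<$ as set inclusion. The following Python sketch is not part of Couturat's text; the three-element universe is an arbitrary illustrative choice, small enough that every combination of classes can be checked exhaustively.

```python
# Exhaustive check of the syllogism (a < b)(b < c) < (a < c) and of the
# sorites, reading "<" as inclusion between subsets of a small universe.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

checked = 0
for a in subsets:
    for b in subsets:
        for c in subsets:
            if a <= b and b <= c:
                assert a <= c            # principle of the syllogism
            checked += 1

for a in subsets:
    for b in subsets:
        for c in subsets:
            for d in subsets:
                if a <= b and b <= c and c <= d:
                    assert a <= d        # sorites: the chain of inclusions
```

Because the laws hold in every finite algebra of classes, the assertions all pass; the check is a sanity test of the finite model, not a proof of the general law.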
\section{Multiplication and Addition}\label{ch:7}
The algebra of logic admits of three operations, \emph{logical
multiplication}, \emph{logical addition},%
\index{Addition!and multiplication!Logical} and \emph{negation}.
The two former are binary operations, that is to say, combinations
of two terms having as a consequent a third term which may or may
not be different from each of them. The existence of the logical
product and logical sum of two terms must necessarily answer the
purpose of a double postulate, for simply to define an entity is
not enough for it to exist. The two postulates may be formulated
thus:
\begin{axiom}\index{Axioms}
Given any two terms,~$a$ and~$b$, then there is a
term~$p$ such that
\begin{displaymath}
p < a, p < b,
\end{displaymath}
and that for every value of~$x$ for which
\begin{displaymath}
x < a, x < b,
\end{displaymath}
we have also
\begin{displaymath}
x < p.
\end{displaymath}
\end{axiom}
\begin{axiom}\index{Axioms}
Given any two terms,~$a$ and~$b$, there exists
a term~$s$ such that
\begin{displaymath}
a < s, b < s,
\end{displaymath}
and that for every value of~$x$ for which
\begin{displaymath}
a < x, b < x,
\end{displaymath}
we have also
\begin{displaymath}
s < x.
\end{displaymath}
\end{axiom}
It is easily proved that the terms~$p$ and~$s$ determined by
the given conditions are unique, and accordingly we can
define \emph{the} product $ab$ and \emph{the} sum $a + b$ as being respectively
the terms~$p$ and~$s$.
C.~I.: 1. The product of two classes is a class~$p$ which
is contained in each of them and which contains every
(other) class contained in each of them;
2. The sum of two classes~$a$ and~$b$ is a class~$s$ which
contains each of them and which is contained in every (other)
class which contains each of them.
Taking the words ``less than'' and ``greater than'' in a metaphorical sense
which the analogy of the relation~$<$ with the mathematical relation of
inequality suggests, it may be said that the product of two classes is the
greatest class contained in both, and the sum of two classes is the
smallest class which contains both.\footnote{According to another analogy
\author{}{Dedekind} designated the logical sum and product by the same
signs as the least common multiple and greatest common divisor (\emph{Was
sind und was sollen die Zahlen?} Nos.~8 and~17, 1887. [Cf.\ English
translation entitled \emph{Essays on Number} (Chicago, Open Court
Publishing Co.\ 1901, pp.~46 and 48)] \author{Cantor, Georg}{Georg
Cantor} originally gave them the same designation (\emph{Mathematische
Annalen}, Vol.~XVII, 1880).} Consequently the product of two
classes is the part that is common to each (the class of their
common elements) and the sum of two classes is the class of all
the elements which belong to at least one of them.
P.~I.: 1. The product of two propositions is a proposition
which implies each of them and which is implied by every
proposition which implies both;
2. The sum of two propositions is the proposition which
is implied by each of them and which implies every proposition
implied by both.
Therefore we can say that the product of two propositions
is their weakest common cause, and that their sum is their
strongest common consequence, strong and weak being used
in the sense that every proposition which implies another is
stronger than the latter and the latter is weaker than the
one which implies it. Thus it is easily seen that the product
of two propositions consists in their \emph{simultaneous affirmation}:
``$a$~and $b$~are true'', or simply~``$a$ and~$b$''; and that their
sum consists in their \emph{alternative affirmation},%
\index{Alternative!affirmation} ``either~$a$ or~$b$
is true'', or simply~``$a$ or~$b$''.
\emph{Remark}.---Logical addition%
\index{Addition!Logical, not disjunctive} thus defined is not disjunctive;%
\footnote{\author{}{Boole,} closely following analogy with
ordinary mathematics, premised, as a necessary condition to the
definition of ``$x + y$'', that~$x$ and~$y$ were mutually
exclusive. \author{}{Jevons,} and practically all mathematical
logicians after him, advocated, on various grounds, the definition
of ``logical addition'' in a form which does not necessitate
mutual exclusiveness.} that is to say, it does not presuppose that
the two summands have no element in common.
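In the class interpretation the two postulates above are satisfied by intersection and union, and this can be confirmed exhaustively. The Python sketch below is an illustration added here, not Couturat's; the universe is an arbitrary small choice.

```python
# The product p = ab as intersection and the sum s = a + b as union:
# each satisfies its defining postulate over every subset x of a small
# illustrative universe.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

for a in subsets:
    for b in subsets:
        p, s = a & b, a | b              # candidate product and sum
        assert p <= a and p <= b         # p < a, p < b
        assert a <= s and b <= s         # a < s, b < s
        for x in subsets:
            if x <= a and x <= b:
                assert x <= p            # x < a, x < b entail x < p
            if a <= x and b <= x:
                assert s <= x            # a < x, b < x entail s < x
```

The check also makes the metaphor concrete: `a & b` is the greatest class contained in both, `a | b` the smallest class containing both.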
\section{Principles of Simplification and Composition}\label{ch:8}
The
two preceding definitions, or rather the postulates which
precede and justify them, yield directly the following formulas:
\begin{gather}
ab < a, \qquad ab < b, \label{eq:simplification1}\\
(x < a) (x < b) < (x < ab), \label{eq:composition1}\\
a < a + b, \qquad b < a + b, \label{eq:simplification2}\\
(a < x) (b < x) < (a + b < x). \label{eq:composition2}
\end{gather}
Formulas~(\ref{eq:simplification1}) and~(\ref{eq:simplification2})
bear the name of the \emph{principle of simplification} because by
means of them the premises of an argument may be simplified by
deducing therefrom weaker propositions, either by deducing one of
the factors from a
product, or by deducing from a proposition a sum (alternative)%
\index{Alternative}
of which it is a summand.
Formulas (\ref{eq:composition1}) and (\ref{eq:composition2}) are
called the \emph{principle of composition},%
\index{Composition!Principle of} because by means of them two inclusions of
the same antecedent or the same consequent may be combined
(\emph{composed}). In the first case we have the product of the
consequents, in the second, the sum of the antecedents.
The formulas of the principle of composition can be transformed
into equalities by means of the principles of the
syllogism and of simplification. Thus we have
\begin{gather*}
\tag*{1 (Syll.)} (x < ab) (ab < a) < (x < a), \\
\tag{Syll.} (x < ab) (ab < b) < (x < b).\\
\intertext{Therefore}
\tag{Comp.} (x < ab) < (x < a) (x < b).\\
\tag*{2 (Syll.)} (a < a + b) (a + b < x) < (a < x),\\
\tag{Syll.} (b < a + b) (a + b < x) < (b < x).\\
\intertext{Therefore}
\tag{Comp.} (a + b < x) < (a < x) (b < x).
\end{gather*}
If we compare the new formulas with those preceding, which are their
converse propositions, we may write
\begin{gather*}
(x < ab) = (x < a) (x < b),\\
(a + b < x) = (a < x) (b < x).
\end{gather*}
Thus, to say that~$x$ is contained in~$ab$ is equivalent to
saying that it is contained at the same time in both~$a$ and~$b$;
and to say that~$x$ contains $a + b$ is equivalent to saying
that it contains at the same time both~$a$ and~$b$.
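The two equalities just obtained can likewise be tested over a finite universe of classes; the Python sketch below (an illustrative addition, with an arbitrary three-element universe) reads the product as intersection and the sum as union.

```python
# The two equalities of the principle of composition, taken with their
# converses: (x < ab) = (x < a)(x < b) and (a + b < x) = (a < x)(b < x).
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

for a in subsets:
    for b in subsets:
        for x in subsets:
            # x is contained in ab iff it is contained in both a and b
            assert (x <= a & b) == (x <= a and x <= b)
            # x contains a + b iff it contains both a and b
            assert (a | b <= x) == (a <= x and b <= x)
```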
\section{The Laws of Tautology and of Absorption}\label{ch:9}
Since the definitions of the logical sum and product do not
imply any order among the terms added or multiplied,
logical addition and multiplication%
\index{Addition!and multiplication!Logical} evidently possess commutative
and associative properties which may be expressed in
the formulas
\begin{displaymath}
\begin{aligned}
ab &= ba,\\
(ab) c &= a (bc),\\
\end{aligned} \qquad
\begin{aligned}
a + b &= b + a,\\
(a + b) + c &= a + (b + c).\\
\end{aligned}
\end{displaymath}
Moreover they possess a special property which is expressed
in the \emph{law of tautology:}
\begin{displaymath}
a = aa, \qquad a = a + a.
\end{displaymath}
\emph{Demonstration:}
\begin{gather*}
\tag*{1 (Simpl.)} aa < a,\\
\tag*{(Comp.)} (a < a) (a < a) = (a < aa),\\
\intertext{whence, by the definition of equality,}
(aa < a) (a < aa) = (a = aa).
\end{gather*}
In the same way:
\begin{gather*}
\tag*{2 (Simpl.)} a < a + a,\\
\tag*{(Comp.)} (a < a) (a < a) = (a + a < a),\\
\intertext{whence}
(a < a + a) (a + a < a) = (a = a + a).
\end{gather*}
From this law it follows that the sum or product of any
number whatever of equal (identical) terms is equal to one
single term. Therefore in the algebra of logic there are
neither multiples nor powers, in which respect it is very
much simpler than numerical algebra.%
\index{Algebra!of logic compared to mathematical algebra}
Finally, logical addition and multiplication%
\index{Addition!and multiplication!Logical} possess a
remarkable property which also serves greatly to simplify
calculations, and which is expressed by the \emph{law of absorption:}%
\index{Absorption!Law of}
\begin{displaymath}
a + ab = a, \qquad a (a + b) = a.
\end{displaymath}
\emph{Demonstration:}
\begin{gather*}
\tag*{1 (Comp.)} (a < a) (ab < a) < (a + ab < a),\\
\tag*{(Simpl.)} a < a + ab,\\
\intertext{whence, by the definition of equality,}
(a + ab < a) (a < a + ab) = (a + ab = a).
\end{gather*}
In the same way:
\begin{align*}
\tag*{2 (Comp.)} (a < a) (a < a + b) < [a < a (a + b)],\\
\tag*{(Simpl.)} a (a + b) < a,\\
\intertext{whence}
[a < a (a + b)] [a (a + b) < a] = [a (a + b) = a].
\end{align*}
Thus a term~$(a)$ \emph{absorbs} a summand~$(ab)$ of which it is a
factor, or a factor $(a + b)$ of which it is a summand.
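The commutative, associative, tautology, and absorption laws of this section can all be confirmed at once over a small universe of classes. As before, the Python sketch is an illustrative addition, not part of the original text.

```python
# Commutativity, associativity, the law of tautology (a = aa, a = a + a),
# and the law of absorption (a + ab = a, a(a + b) = a), checked over all
# subsets of an arbitrary three-element universe.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

for a in subsets:
    for b in subsets:
        assert a & b == b & a and a | b == b | a    # commutativity
        assert a & a == a and a | a == a            # law of tautology
        assert a | (a & b) == a                     # a + ab = a
        assert a & (a | b) == a                     # a(a + b) = a
        for c in subsets:
            assert (a & b) & c == a & (b & c)       # associativity
            assert (a | b) | c == a | (b | c)
```

The tautology checks illustrate why the algebra of logic has neither multiples nor powers: repeating a term changes nothing.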
\section{Theorems on Multiplication and Addition}\label{ch:10}
We
can now establish two theorems with regard to the combination
of inclusions and equalities by addition and multiplication:%
\index{Addition!and multiplication!Theorems on}
\begin{theorem}
\begin{displaymath}
(a < b) < (ac < bc), \qquad (a < b) < (a + c < b + c).
\end{displaymath}
\end{theorem}
\emph{Demonstration:}
\begin{gather*}
\tag*{1 (Simpl.)} ac < c, \\
\tag*{(Syll.)} (ac < a) (a < b) < (ac < b), \\
\tag*{(Comp.)} (ac < b) (ac < c) < (ac < bc). \\
\tag*{2 (Simpl.)} c < b + c, \\
\tag*{(Syll.)} (a < b) (b < b + c) < (a < b + c). \\
\tag*{(Comp.)} (a < b + c) (c < b + c) < (a + c < b + c).
\end{gather*}
This theorem may be easily extended to the case of
equalities:
\begin{displaymath}
(a = b) < (ac = bc), \qquad (a = b) < (a + c = b + c).
\end{displaymath}
\begin{theorem}
\begin{gather*}
(a < b) (c < d) < (ac < bd), \\
(a < b) (c < d) < (a + c < b + d).
\end{gather*}
\end{theorem}
\emph{Demonstration:}
\begin{align*}
\tag*{1 (Syll.)} (ac < a) (a < b) < (ac < b),\\
\tag*{(Syll.)} (ac < c) (c < d) < (ac < d),\\
\tag*{(Comp.)} (ac < b) (ac < d) < (ac < bd).\\
\tag*{2 (Syll.)} (a < b) (b < b + d) < (a < b + d),\\
\tag*{(Syll.)} (c < d) (d < b + d) < (c < b + d),\\
\tag*{(Comp.)} (a < b + d) (c < b + d) < (a + c < b + d).
\end{align*}
This theorem may easily be extended to the case in which
one of the two inclusions is replaced by an equality:
\begin{gather*}
(a = b) (c < d) < (ac < bd),\\
(a = b) (c < d) < (a + c < b + d).
\end{gather*}
When both are replaced by equalities the result is an
equality:
\begin{align*}
(a = b) (c = d) &< (ac = bd),\\
(a = b) (c = d) &< (a + c = b + d).
\end{align*}
To sum up, two or more inclusions or equalities can be
added or multiplied together member by member; the result
will not be an equality unless all the propositions combined are equalities.
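Both theorems, together with their extensions to equalities, can be verified exhaustively in the class interpretation. The Python sketch below is an added illustration with an arbitrary small universe.

```python
# Th. I: (a < b) < (ac < bc) and (a < b) < (a + c < b + c).
# Th. II: (a < b)(c < d) < (ac < bd) and (a < b)(c < d) < (a + c < b + d),
# with equalities in place of inclusions yielding equalities.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

for a in subsets:
    for b in subsets:
        for c in subsets:
            if a <= b:                                   # Th. I
                assert a & c <= b & c and a | c <= b | c
            for d in subsets:
                if a <= b and c <= d:                    # Th. II
                    assert a & c <= b & d
                    assert a | c <= b | d
                if a == b and c == d:                    # both equalities
                    assert a & c == b & d
                    assert a | c == b | d
```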
\section{The First Formula for Transforming Inclusions
into Equalities}\label{ch:11}
We can now demonstrate an important
formula by which an inclusion may be transformed into an
equality, or \emph{vice versa}:
\begin{displaymath}
(a < b) = (a = ab), \qquad (a < b) = (a + b = b).
\end{displaymath}
\emph{Demonstration:}
\begin{displaymath}
(a < b) < (a = ab),\qquad (a < b) < (a + b = b).\tag*{1.}
\end{displaymath}
For
\begin{gather*}
\tag{Comp.} (a < a) (a < b) < (a < ab),\\
(a < b) (b < b) < (a + b < b).
\end{gather*}
On the other hand, we have
\begin{gather*}
\tag{Simpl.} ab < a, b < a + b,\\
\tag{Def. =} (a < ab) (ab < a) = (a = ab),\\
(a + b < b) (b < a + b) = (a + b = b).
\end{gather*}
\begin{displaymath}
\tag*{2.} (a = ab) < (a < b),\qquad (a + b = b) < (a < b).
\end{displaymath}
For
\begin{gather*}
(a = ab) (ab < b) < (a < b),\\
(a < a + b) (a + b = b) < (a < b).
\end{gather*}
\emph{Remark}.---If we take the relation of equality as a primitive
idea (one not defined) we shall be able to define the relation of
inclusion by means of one of the two preceding formulas.\footnote{See
\author{}{Huntington,} \emph{op.~cit.}, \S\ref{ch:1}.} We shall then be able
to demonstrate the principle of the syllogism.\footnote{This can be
demonstrated as follows: By definition we have $(a < b) = (a =
ab)$, and $(b < c) = (b = bc)$. If in the first equality we
substitute for~$b$ its value derived from the second equality, then
$a = abc$. Substitute for~$a$ its equivalent $ab$, then $ab = abc$.
This equality is equivalent to the inclusion, $ab < c$.
Conversely substitute~$a$ for $ab$; whence we have $a < c$.
Q.E.D.}
From the preceding formulas may be derived an interesting result:
\begin{displaymath}
(a = b) = (ab = a + b).
\end{displaymath}
For
\begin{gather*}
\tag*{1.} (a = b) = (a < b) (b < a),\\
(a < b) = (a = ab), (b < a) = (a + b = a),\\
\tag{Syll.} (a = ab) (a + b = a) < (ab = a + b).
\end{gather*}
\begin{gather*}
\tag*{2.} (ab=a+b) < (a + b < ab),\\
\tag{Comp.} (a+b < ab) = (a < ab) (b < ab),\\
(a < ab) (ab < a) = (a = ab) = (a < b),\\
(b < ab) (ab < b) = (b=ab) = (b < a),
\end{gather*}
Hence
\begin{displaymath}
(ab = a + b) < (a < b) (b < a) = (a = b).
\end{displaymath}
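The two transformation formulas, and the derived equivalence $(a = b) = (ab = a + b)$, admit the same exhaustive verification over classes. The Python sketch below is an added illustration, with the universe an arbitrary choice.

```python
# The first formulas for transforming inclusions into equalities:
# (a < b) = (a = ab) = (a + b = b), and the derived result
# (a = b) = (ab = a + b), checked over all pairs of subsets.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

for a in subsets:
    for b in subsets:
        assert (a <= b) == (a == a & b)       # (a < b) = (a = ab)
        assert (a <= b) == (a | b == b)       # (a < b) = (a + b = b)
        assert (a == b) == (a & b == a | b)   # (a = b) = (ab = a + b)
```

Either of the first two lines could serve as a definition of inclusion in terms of equality, as the Remark above observes.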
\section{The Distributive Law}\label{ch:12}
The principles previously
stated make it possible to demonstrate the \emph{converse distributive
law}, both of multiplication with respect to addition, and of
addition with respect to multiplication,
\begin{displaymath}
ac+bc < (a + b) c,\qquad ab+c < (a+c)(b+c).
\end{displaymath}
\emph{Demonstration:}
\begin{gather*}
\tag*{1.} (a < a+b) < [ac < (a + b)c],\\
(b < a+b) < [bc < (a + b)c];\\
\intertext{whence, by composition,}
[ac < (a+b)c] [bc < (a+b)c] < [ac+bc < (a+b)c].
\end{gather*}
\begin{gather*}
\tag*{2.} (ab < a) < (ab+c < a+c), \\
(ab < b) < (ab+c < b+c),
\end{gather*}
whence, by composition,
\begin{displaymath}
(ab+c < a+c)(ab+c < b+c) < [ab+c < (a+c)(b+c)].
\end{displaymath}
But these principles are not sufficient to demonstrate the
\emph{direct distributive law}
\begin{displaymath}
(a+b)c < ac+bc,\qquad (a+c)(b+c) < ab+c,
\end{displaymath}
and we are obliged to postulate one of these formulas or
some simpler one from which they can be derived. For
greater convenience we shall postulate the formula
\begin{axiom}\index{Axioms}
\begin{displaymath}
(a + b) c < a c + b c.
\end{displaymath}
\end{axiom}
This, combined with the converse formula, produces the equality
\begin{displaymath}
(a+b)c = ac+bc
\end{displaymath}
which we shall call briefly the \emph{distributive law.}
From this may be directly deduced the formula
\begin{displaymath}
(a + b)(c + d) = ac + bc + ad + bd,
\end{displaymath}
and consequently the second formula of the distributive law,
\begin{displaymath}
(a + c) (b + c) = ab + c.
\end{displaymath}
For
\begin{displaymath}
(a + c) (b + c) = ab + ac + bc + c,
\end{displaymath}
and, by the law of absorption,
\begin{displaymath}
ac + bc + c = c.
\end{displaymath}
This second formula implies the inclusion cited above,
\begin{displaymath}
(a + c) (b + c) < ab + c,
\end{displaymath}
which is thus demonstrated.
\emph{Corollary}.---We have the equality
\begin{displaymath}
ab + ac + bc = (a + b) (a + c) (b + c),
\end{displaymath}
for
\begin{displaymath}
(a + b) (a + c) (b + c) = (a + bc) (b + c) = ab + ac + bc.
\end{displaymath}
It will be noted that the two members of this equality
differ only in having the signs of multiplication and addition
transposed (compare \S\ref{ch:14}).
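Both forms of the distributive law, together with the self-dual corollary, hold in the algebra of classes and can be checked exhaustively. The following Python sketch is an added illustration over an arbitrary three-element universe.

```python
# The distributive law (a + b)c = ac + bc, its second form
# (a + c)(b + c) = ab + c, and the corollary
# ab + ac + bc = (a + b)(a + c)(b + c), over all triples of subsets.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

for a in subsets:
    for b in subsets:
        for c in subsets:
            assert (a | b) & c == (a & c) | (b & c)   # distributive law
            assert (a & b) | c == (a | c) & (b | c)   # second formula
            assert ((a & b) | (a & c) | (b & c)
                    == (a | b) & (a | c) & (b | c))   # corollary
```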
\section{Definition of 0 and 1}\label{ch:13}
We shall now define and introduce into the logical calculus two
special terms which we shall designate by 0 and by 1, because of some
formal analogies that they present with the zero and unity of
arithmetic. These two terms are formally defined by the two following
principles which affirm or postulate their existence.
\begin{axiom}\index{Axioms}
There is a term~0 such that whatever value
may be given to the term~$x$, we have
\begin{displaymath}
0 < x.
\end{displaymath}
\end{axiom}
\begin{axiom}\index{Axioms}
There is a term~1 such that whatever value
may be given to the term~$x$, we have
\begin{displaymath}
x < 1.
\end{displaymath}
\end{axiom}
It may be shown that each of the terms thus defined is
unique; that is to say, if a second term possesses the same
property it is equal to (identical with) the first.
The two interpretations of these terms give rise to paradoxes which we
shall not stop to elucidate here, but which will be justified by the
conclusions of the theory.\footnote{Compare the
author's\index{Couturat} \emph{Manuel de Logistique}, Chap.~I., \S
8, Paris, 1905 [This work, however, did not appear].}
C.~I.: 0~denotes the class contained in every class; hence it is
the ``null'' or ``void'' class which contains no element (Nothing
or Naught). 1~denotes the class which contains all classes; hence
it is the totality of the elements which are contained within it.
It is called, after \author{}{Boole,} the ``universe of
discourse'' or simply the ``whole''.
P.~I.: 0~denotes the proposition which implies every proposition;
it is the ``false'' or the ``absurd'', for it implies
notably all pairs of contradictory propositions.%
\index{Contradictory!propositions} 1~denotes
the proposition which is implied in every proposition; it is
the ``true'', for the false may imply the true whereas the true
can imply only the true.
By definition we have the following inclusions
\begin{displaymath}
0 < 0,\quad 0 < 1,\quad 1 < 1,
\end{displaymath}
the first and last of which, moreover, result from the principle of
identity. It is important to bear the second in mind.
C.~I.: The null class is contained in the \emph{whole}.\footnote{The
rendering ``Nothing is everything'' must be avoided.}
P.~I.: The false implies the true.
By the definitions of~0 and~1 we have the equivalences
\begin{displaymath}
(a < 0) = (a = 0),\quad (1 < a) = (a = 1),
\end{displaymath}
since we have
\begin{displaymath}
0 < a,\quad a < 1
\end{displaymath}
whatever the value of~$a$.
Consequently the principle of composition%
\index{Composition!Principle of} gives rise to
the two following corollaries:
\begin{gather*}
(a = 0) (b = 0) = (a + b = 0),\\
(a = 1) (b = 1) = (ab = 1).
\end{gather*}
Thus we can combine two equalities having~0 for a second member by
adding their first members, and two equalities having~1 for a
second member by multiplying their first members.
Conversely, to say that a sum is ``null'' [zero] is to say that
each of the summands is null; to say that a product is equal
to 1 is to say that each of its factors is equal to 1.
Thus we have
\begin{align*}
(a + b = 0) &< (a = 0),\\
(ab = 1) &< (a = 1),
\end{align*}
and more generally (by the principle of the syllogism)
\begin{align*}
(a < b) (b = 0) &< (a = 0),\\
(a < b) (a = 1) &< (b = 1).
\end{align*}
It will be noted that we can not conclude from these the
equalities $ab = 0$ and $a + b = 1$. And indeed in the conceptual
interpretation the first equality denotes that the part
common to the classes~$a$ and~$b$ is null; it by no means
follows that either one or the other of these classes is null.
The second denotes that these two classes combined form
the whole; it by no means follows that either one or the
other is equal to the whole.
The following formulas comprising the rules for the calculus
of~0 and~1, can be demonstrated:
\begin{gather*}
a \times 0 = 0, \quad a + 1 = 1,\\
a + 0 = a, \quad a \times 1 = a.
\end{gather*}
For
\begin{align*}
(0 < a) &= (0 = 0 \times a) = (a + 0 = a),\\
(a < 1) &= (a = a \times 1) = (a + 1 = 1).
\end{align*}
Accordingly it does not change a term to add~0 to it or to multiply it
by~1. We express this fact by saying that 0~is the \emph{modulus} of
addition and~1 the \emph{modulus} of multiplication.%
\index{Addition!and multiplication!Modulus of}
On the other hand, the product of any term whatever by~0 is 0~and the
sum of any term whatever with 1~is~1.
These formulas justify the following interpretation of the
two terms:
C.~I.: The part common to any class whatever and to the
null class is the null class; the sum of any class whatever
and of the whole is the whole. The sum of the null class and
of any class whatever is equal to the latter; the part common
to the whole and any class whatever is equal to the latter.
P.~I.: The simultaneous affirmation of any proposition
whatever and of a false proposition is equivalent to the latter
(\emph{i.e.}, it is false); while their alternative affirmation%
\index{Alternative!affirmation} is equal to the former. The
simultaneous affirmation of any proposition whatever and of a true
proposition is equivalent to the former; while their alternative
affirmation is equivalent to the latter (\emph{i.e.}, it is true).
\emph{Remark.}---If we accept the four preceding formulas as
axioms, because of the proof afforded by the double interpretation,
we may deduce from them the paradoxical formulas
\begin{displaymath}
0 < x, \text{ and } x < 1,
\end{displaymath}
by means of the equivalences established above,
\begin{displaymath}
(a = ab) = (a < b) = (a + b = b).
\end{displaymath}
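The whole calculus of 0 and 1 in this section, including the corollaries of composition, can be confirmed over classes by taking 0 as the null class and 1 as the universe of discourse. The Python sketch below is an added illustration; the universe is an arbitrary choice.

```python
# 0 as the empty set and 1 as the universe: 0 < x, x < 1, the moduli
# a + 0 = a and a . 1 = a, the rules a . 0 = 0 and a + 1 = 1, and the
# corollaries of composition for sums equal to 0 and products equal to 1.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]
ZERO, ONE = frozenset(), frozenset(U)

for a in subsets:
    assert ZERO <= a and a <= ONE              # 0 < x and x < 1
    assert a & ZERO == ZERO and a | ONE == ONE # a . 0 = 0, a + 1 = 1
    assert a | ZERO == a and a & ONE == a      # 0 and 1 as moduli
    for b in subsets:
        assert (a | b == ZERO) == (a == ZERO and b == ZERO)
        assert (a & b == ONE) == (a == ONE and b == ONE)
```

Note that the last two lines are equivalences, in keeping with the text: a null sum forces every summand to be null, and a product equal to 1 forces every factor to be 1, while nothing analogous holds for $ab = 0$ or $a + b = 1$.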
\section{The Law of Duality}\label{ch:14}
We have proved that a perfect
symmetry exists between the formulas relating to multiplication
and those relating to addition. We can pass from one class
to the other by interchanging the signs of addition and
multiplication, on condition that we also interchange the
terms~0 and~1 and reverse the meaning of the sign~< (or
transpose the two members of an inclusion). This symmetry, or
\emph{duality} as it is called, which exists in principles and definitions,
must also exist in all the formulas deduced from them as
long as no principle or definition is introduced which would
overthrow them. Hence a true formula may be deduced
from another true formula by transforming it by the principle
of duality; that is, by following the rule given above. In its
application the \emph{law of duality} makes it possible to replace
two demonstrations by one. It is well to note that this law
is derived from the definitions of addition and multiplication
(the formulas for which are reciprocal by duality)
and not, as is often thought%
\footnote{\label{fn:boole}\author{}{Boole} thus derives it
(\emph{Laws of Thought}, London 1854, Chap.~III, Prop.~IV).}, from
the laws of negation which have not yet been stated. We shall see
that these laws possess the same property and consequently
preserve the duality, but they do not originate it; and duality
would exist even if the idea of negation were not introduced. For
instance, the equality (\S\ref{ch:12})
\begin{displaymath}
ab + ac + bc = (a + b) (a + c) (b + c)
\end{displaymath}
is its own reciprocal by duality, for its two members are
transformed into each other by duality.
It is worth remarking that the law of duality is only
applicable to primary propositions. We call [after \author{}{Boole}]
those propositions \emph{primary} which contain but one copula
($<$ or $=$). We call those propositions \emph{secondary} of which
both members (connected by the copula $<$ or $=$) are primary
propositions, and so on. For instance, the principle of
identity and the principle of simplification are primary propositions,
while the principle of the syllogism and the principle
of composition are secondary propositions.
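The rule of duality itself can be given a small symbolic sketch: represent a formula as a nested expression, let the dual transformation swap the two operations and the two constants, and observe that a law valid for all classes remains valid after dualizing. The representation below is an illustrative assumption of this edition, not Couturat's notation.

```python
# Formulas as nested tuples ('*' product, '+' sum), variables as strings,
# constants '0' and '1'.  dual() swaps the operations and the constants;
# valid() tests an equality for every assignment of classes.
from itertools import combinations, product

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

def dual(f):
    if f == '0': return '1'
    if f == '1': return '0'
    if isinstance(f, str): return f          # a variable is its own dual
    op, l, r = f
    return ('+' if op == '*' else '*', dual(l), dual(r))

def ev(f, env):
    if f == '0': return frozenset()
    if f == '1': return frozenset(U)
    if isinstance(f, str): return env[f]
    op, l, r = f
    return ev(l, env) & ev(r, env) if op == '*' else ev(l, env) | ev(r, env)

def valid(f, g):                             # does f = g for all assignments?
    return all(ev(f, {'a': a, 'b': b}) == ev(g, {'a': a, 'b': b})
               for a, b in product(subsets, repeat=2))

absorb = ('+', 'a', ('*', 'a', 'b'))         # a + ab
assert valid(absorb, 'a')                    # law of absorption
assert valid(dual(absorb), 'a')              # its dual a(a + b) = a also holds
```

As the text notes, the duality is established here without any appeal to negation: the transformation touches only the operations and the constants.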
\section{Definition of Negation}\label{ch:15}
The introduction of the terms 0 and 1 makes it possible for us to
define \emph{negation}. This is a ``uni-nary'' operation which
transforms a single term into another term called its
\emph{negative}.\footnote{[In French] the same word \emph{negation}
denotes both the operation and its result, which becomes equivocal.
The result ought to be denoted by another word, like [the English]
``negative''. Some authors say, ``supplementary'' or ``supplement'',
[\emph{e.g.} \author{}{Boole} and \author{}{Huntington}]. Classical logic
makes use of the term ``contradictory'' especially for
propositions.} The negative of $a$ is called not-$a$ and is written
$a'$.\footnote{We adopt here the notation of \author{}{MacColl;}
\author{}{Schröder} indicates not-$a$ by $a_1$ which prevents the
use of indices and obliges us to express them as exponents. The
notation $a'$ has the advantage of excluding neither indices nor
exponents. The notation $\bar{a}$ employed by many authors is
inconvenient for typographical reasons. When the negative affects a
proposition written in an explicit form (with a copula) it is
applied to the copula ($<$ or $=$) by a vertical bar ($\nless$ or
$\not=$). The accent can be considered as the indication of a
vertical bar applied to letters.} Its formal definition implies the
following postulate of existence\footnote{\author{}{Boole} follows
Aristotle\index{Aristotle} in usually calling the law of duality the
principle of contradiction%
\index{Contradiction!Principle of} ``which affirms that it is
impossible for any being to possess a quality and at the same time
not to possess it''. He writes it in the form of an equation of
the second degree, $x - x^{2} = 0$, or $x (1 - x) = 0$ in which $1
- x$ expresses the universe less $x$, or not $x$. Thus he regards
the law of duality as derived from negation as stated in
note~\ref{fn:boole} above.}:
\begin{axiom}\index{Axioms}
Whatever the term~$a$ may be, there is also a
term~$a'$ such that we have at the same time
\begin{displaymath}
aa' = 0, \quad a + a' = 1.
\end{displaymath}
\end{axiom}
It can be proved by means of the following \emph{lemma} that if
a term so defined exists it is unique:
If at the same time
\begin{displaymath}
ac = bc, \quad a + c = b + c,
\end{displaymath}
then
\begin{displaymath}
a = b.
\end{displaymath}
\emph{Demonstration.}---Multiplying both members of the second
premise by~$a$, we have
\begin{displaymath}
a + ac = ab + ac.
\end{displaymath}
Multiplying both members by~$b$,
\begin{displaymath}
ab + bc = b + bc.
\end{displaymath}
By the first premise,
\begin{displaymath}
ab + ac = ab + bc.
\end{displaymath}
Hence
\begin{displaymath}
a + ac = b + bc,
\end{displaymath}
which by the law of absorption may be reduced to
\begin{displaymath}
a = b.
\end{displaymath}
\emph{Remark.}---This demonstration rests upon the direct distributive
law. This law cannot, then, be demonstrated by means
of negation, at least in the system of principles which we are
adopting, without reasoning in a circle.
This lemma being established, let us suppose that the same
term~$a$ has two negatives; in other words, let $a'_{1}$ and
$a'_{2}$ be two terms each of which by itself satisfies the
conditions of the definition. We will prove that they are equal.
Since, by hypothesis,
\begin{gather*}
aa'_1 = 0, \quad a + a'_1 = 1,\\
aa'_2 = 0, \quad a + a'_2 = 1,
\end{gather*}
we have
\begin{displaymath}
aa'_1 = a a'_2, \quad a + a'_1 = a + a'_2;
\end{displaymath}
whence we conclude, by the preceding lemma, that
\begin{displaymath}
a'_1 = a'_2.
\end{displaymath}
We can now speak of \emph{the} negative of a term as of a unique
and well-defined term.
The \emph{uniformity} of the operation of negation may be expressed
in the following manner:
If $a = b$, then also $a' = b'$. By this proposition, both
members of an equality in the logical calculus may be
``denied''.
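In the class interpretation the negative of~$a$ is its complement with respect to the whole, and both the defining postulate and the uniqueness lemma can be checked exhaustively. The Python sketch below is an added illustration over an arbitrary small universe.

```python
# The complement ONE - a as the negative a': it satisfies aa' = 0 and
# a + a' = 1, and the lemma (ac = bc)(a + c = b + c) < (a = b) holds,
# which is what makes the negative unique.
from itertools import combinations

U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]
ZERO, ONE = frozenset(), frozenset(U)

for a in subsets:
    n = ONE - a                              # the complement of a
    assert a & n == ZERO and a | n == ONE    # aa' = 0, a + a' = 1

for a in subsets:
    for b in subsets:
        for c in subsets:
            if a & c == b & c and a | c == b | c:
                assert a == b                # the uniqueness lemma
```

The first loop is exactly the pair of formulas that §16 will read as the principles of contradiction and excluded middle.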
\section{The Principles of Contradiction and of Excluded Middle}
\label{ch:16}\index{Contradiction!Principle of|(}
By definition, a term and its negative verify the
two formulas
\begin{displaymath}
aa' = 0, \quad a + a' = 1,
\end{displaymath}
which represent respectively the \emph{principle of contradiction} and
the \emph{principle of excluded middle}.%
\footnote{As \author{}{Mrs. Ladd-Franklin} has truly remarked
(\author{}{Baldwin,}\index{Baldwin} \emph{Dictionary of Philosophy
and Psychology}, article ``Laws of Thought''), the principle of
\emph{contradiction} is not sufficient to define
\emph{contradictories}; the principle of excluded middle must be
added which equally deserves the name of principle of
contradiction. This is why \author{}{Mrs. Ladd-Franklin} proposes
to call them respectively the \emph{principle of exclusion} and
the \emph{principle of
exhaustion}, inasmuch as, according to the first, two contradictory terms%
\index{Contradictory!terms}
are \emph{exclusive} (the one of the other); and, according to the second, they
are \emph{exhaustive} (of the universe of discourse).}
C.~I.: 1. The classes~$a$ and~$a'$ have nothing in common;
in other words, no element can be at the same time both~$a$
and not-$a$.
2. The classes~$a$ and~$a'$ combined form the whole; in
other words, every element is either~$a$ or not-$a$.
P.~I.: 1. The simultaneous affirmation of the propositions
$a$ and not-$a$ is false; in other words, these two propositions
cannot both be true at the same time.
2. The alternative affirmation%
\index{Alternative!affirmation} of the propositions~$a$ and
not-$a$ is true; in other words, one of these two propositions
must be true.
Two propositions are said to be \emph{contradictory} when one is
the negative of the other; they cannot both be true or false
at the same time. If one is true the other is false; if one
is false the other is true.
This is in agreement with the fact that the terms 0 and 1
are the negatives of each other; thus we have
\begin{displaymath}
0 \times 1 = 0, \quad 0 + 1 = 1.
\end{displaymath}
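Because the calculus admits the two-element interpretation in which every term is either 0 or 1, both principles can be checked mechanically. A brief modern sketch in Python (not part of the original text; the names are ours):

```python
def neg(a):          # the negative a'
    return 1 - a

def mul(a, b):       # logical multiplication (intersection)
    return a & b

def add(a, b):       # logical addition (union)
    return a | b

for a in (0, 1):
    assert mul(a, neg(a)) == 0   # principle of contradiction: aa' = 0
    assert add(a, neg(a)) == 1   # principle of excluded middle: a + a' = 1
```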
Generally speaking, we say that two terms are \emph{contradictory}
when one is the negative of the other.%
\index{Contradiction!Principle of|)}
\section{Law of Double Negation}\label{ch:17}
Moreover this reciprocity is general: if a term~$b$ is the negative of the
term~$a$, then the term~$a$ is the negative of the term~$b$. These two
statements are expressed by the same formulas
\begin{displaymath}
ab = 0, \quad a + b = 1,
\end{displaymath}
and, while they unequivocally determine $b$ in terms of $a$, they likewise
determine $a$ in terms of $b$. This is due to the symmetry of these
relations, that is to say, to the commutativity\index{Commutativity} of
multiplication and addition. This reciprocity is expressed by the \emph{law
of double negation}
\begin{displaymath}
(a')' = a,
\end{displaymath}
which may be formally proved as follows: $a'$ being by hypothesis
the negative of $a$, we have
\begin{displaymath}
a a' = 0, \quad a + a' = 1.
\end{displaymath}
On the other hand, let $a''$ be the negative of $a'$; we have,
in the same way,
\begin{displaymath}
a' a'' = 0, \quad a' + a'' = 1.
\end{displaymath}
But, by the preceding lemma, these four equalities involve
the equality
\begin{displaymath}
a = a''.
\end{displaymath}
Q. E. D.
This law may be expressed in the following manner:
If $b = a'$, we have $a = b'$, and conversely, by symmetry.
This proposition makes it possible, in calculations, to
transpose the negative from one member of an equality to
the other.
The law of double negation makes it possible to conclude
the equality of two terms from the equality of their negatives
(if $a' = b'$ then $a = b$), and therefore to cancel the negation
of both members of an equality.
From the characteristic formulas of negation together with
the fundamental properties of~0 and~1, it results that every
product which contains two contradictory factors is null, and
that every sum which contains two contradictory summands
is equal to~1.
In particular, we have the following formulas:
\begin{displaymath}
a = ab + ab', \quad a = (a + b) (a + b'),
\end{displaymath}
which may be demonstrated as follows by means of the
distributive law:
\begin{gather*}
a = a \times 1 = a (b + b') = ab + ab', \\
a = a + 0 = a + bb' = (a + b) (a + b').
\end{gather*}
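These two development formulas can likewise be verified by exhausting the four cases of the two-element algebra; a small illustrative check in Python (names ours):

```python
from itertools import product

def neg(a):
    return 1 - a

for a, b in product((0, 1), repeat=2):
    assert a == (a & b) | (a & neg(b))        # a = ab + ab'
    assert a == (a | b) & (a | neg(b))        # a = (a + b)(a + b')
```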
These formulas indicate the principle of the method of
development which we shall explain in detail later (\S\S\ref{ch:21} sqq.).
\section{Second Formulas for Transforming Inclusions
into Equalities}\label{ch:18}
We can now establish two very important
equivalences between inclusions and equalities:
\begin{displaymath}
(a < b) = (ab' = 0), \quad (a < b) = (a' + b = 1).
\end{displaymath}
\emph{Demonstration}.---1. If we multiply the two members of the
inclusion $a < b$ by $b'$ we have
\begin{displaymath}
(ab' < bb') = (ab' < 0) = (ab' = 0).
\end{displaymath}
2. Again, we know that
\begin{displaymath}
a = ab + ab'.
\end{displaymath}
Now if $ab' = 0$,
\begin{displaymath}
a = ab + 0 = ab.
\end{displaymath}
On the other hand: 1. Add $a'$ to each of the two members
of the inclusion $a < b$; we have
\begin{displaymath}
(a' + a < a' + b) = (1 < a' + b) = (a' + b = 1).
\end{displaymath}
2. We know that
\begin{displaymath}
b = (a + b)(a' + b).
\end{displaymath}
Now, if $a' + b = 1$,
\begin{displaymath}
b = (a + b) \times 1 = a + b.
\end{displaymath}
By the preceding formulas, an inclusion can be transformed
at will into an equality whose second member is either 0 or 1.
Any equality may also be transformed into an equality of
this form by means of the following formulas:
\begin{displaymath}
(a = b) = (ab' + a'b = 0), \quad (a = b) = [(a + b')(a' + b) = 1].
\end{displaymath}
\emph{Demonstration:}
\begin{gather*}
(a = b) = (a < b)(b < a) = (ab' = 0)(a'b = 0) = (ab' + a'b = 0),\\
(a = b) = (a < b)(b < a) = (a' + b = 1)(a + b' = 1) =
[(a + b')(a' + b) = 1].
\end{gather*}
Again, we have the two formulas
\begin{displaymath}
(a = b) = [(a + b)(a' + b') = 0], \quad (a = b) = (ab + a'b' = 1),
\end{displaymath}
which can be deduced from the preceding formulas by performing
the indicated multiplications (or the indicated additions)
by means of the distributive law.
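All four transformations admit a mechanical verification over the two-element algebra, treating the inclusion $a<b$ as the numerical relation $a \leq b$. A modern illustration in Python (names ours):

```python
from itertools import product

def neg(a):
    return 1 - a

def incl(a, b):
    return int(a <= b)   # the inclusion a < b, as a 0/1 proposition

for a, b in product((0, 1), repeat=2):
    assert incl(a, b) == int((a & neg(b)) == 0)    # (a < b) = (ab' = 0)
    assert incl(a, b) == int((neg(a) | b) == 1)    # (a < b) = (a' + b = 1)
    assert int(a == b) == int(((a & neg(b)) | (neg(a) & b)) == 0)
    assert int(a == b) == int(((a | neg(b)) & (neg(a) | b)) == 1)
```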
\section{The Law of Contraposition}
\label{ch:19}
We are now able to demonstrate the \emph{law of contraposition},%
\index{Contraposition!Law of}
\begin{displaymath}
(a < b) = (b' < a').
\end{displaymath}
\emph{Demonstration}.---By the preceding formulas, we have
\begin{displaymath}
(a < b) = (ab' = 0) = (b' < a').
\end{displaymath}
Again, the law of contraposition may take the form
\begin{displaymath}
(a < b') = (b < a'),
\end{displaymath}
which presupposes the law of double negation. It may be
expressed verbally as follows: ``Two members of an inclusion
may be interchanged on condition that both are denied''.
C.~I.: ``If all $a$ is $b$, then all not-$b$ is not-$a$, and conversely''.
P.~I.: ``If $a$ implies $b$, not-$b$ implies not-$a$ and conversely'';
in other words, ``If $a$ is true $b$ is true'', is equivalent to
saying, ``If $b$ is false, $a$ is false''.
This equivalence is the principle of the \emph{reductio ad absurdum}
(see hypothetical arguments, \emph{modus tollens}, \S\ref{ch:58}).
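Both forms of the law of contraposition can be checked exhaustively in the two-element interpretation; a brief Python sketch (a modern addition, names ours):

```python
from itertools import product

def incl(a, b):
    return int(a <= b)   # the inclusion a < b, as a 0/1 proposition

for a, b in product((0, 1), repeat=2):
    assert incl(a, b) == incl(1 - b, 1 - a)       # (a < b) = (b' < a')
    assert incl(a, 1 - b) == incl(b, 1 - a)       # (a < b') = (b < a')
```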
\section{Postulate of Existence}\label{ch:20}
One final axiom may be
formulated here, which we will call the \emph{postulate of existence}:
\begin{axiom}\label{axiom:IX}\index{Axioms}
\begin{displaymath}
1 \nless 0
\end{displaymath}
\end{axiom}
whence may be also deduced $1 \neq 0$.
In the conceptual interpretation (C.~I.) this axiom means
that the universe of discourse is not null, that is to say, that
it contains some elements, at least one. If it contains but
one, there are only two classes possible, $1$ and $0$. But even
then they would be distinct, and the preceding axiom would
be verified.
In the propositional interpretation (P.~I.) this axiom signifies
that the true and false are distinct; in this case, it bears
the mark of evidence and necessity. The contrary
proposition, $1 = 0$, is, consequently, the type of \emph{absurdity}%
\index{Absurdity!Type of}
(of the formally false proposition) while the propositions $0 = 0$,
and $1 = 1$ are types of \emph{identity} (of the formally true proposition).
Accordingly we put
\begin{displaymath}
(1 = 0) = 0, \quad (0 = 0) = (1 = 1) = 1.
\end{displaymath}
More generally, every equality of the form
\begin{displaymath}
x = x
\end{displaymath}
is equivalent to one of the identity-types; for, if we reduce
this equality so that its second member will be~$0$ or~$1$, we find
\begin{displaymath}
(xx' + xx' = 0) = (0 = 0), \quad (xx + x'x' = 1) = (1 = 1).
\end{displaymath}
On the other hand, every equality of the form
\begin{displaymath}
x = x'
\end{displaymath}
is equivalent to the absurdity-type, for we find by the same
process,
\begin{displaymath}
(xx + x'x' = 0) = (1 = 0), \quad (xx' + xx' = 1) = (0 = 1).
\end{displaymath}
\section{The Development of~0 and of~1}\label{ch:21}
Hitherto we
have met only such formulas as directly express customary
modes of reasoning and consequently offer direct evidence.
We shall now expound theories and methods which depart
from the usual modes of thought and which constitute more
particularly the algebra of logic in so far as it is a formal
and, so to speak, automatic method of an absolute universality
and an infallible certainty, replacing reasoning by calculation.
The fundamental process of this method is \emph{development}.
Given the terms $a, b, c \ldots$ (to any finite number), we can
develop 0 or 1 with respect to these terms (and their negatives)
by the following formulas derived from the distributive law:
\begin{align*}
0 &= aa',\\
0 &= aa' + bb' = (a + b) (a + b') (a' + b) (a' + b'),\\
0 &= aa' + bb' + cc' =
\begin{aligned}[t]
(a &+ b + c) (a + b + c') (a + b' + c)\\
&\times (a + b' + c') (a' + b + c)\\
&\times (a' + b + c') (a' + b' + c) (a' + b' + c');\\
\end{aligned}\\
1 &= a + a',\\
1 &= (a + a') (b + b') = ab + ab' + a'b + a'b',\\
1 &= (a + a') (b + b') (c + c')
\begin{aligned}[t]
&= abc + abc' + ab'c + ab'c'\\
&+ a'bc + a'bc' + a'b'c + a'b'c';\\
\end{aligned}
\end{align*}
and so on. In general, for any number~$n$ of simple terms,
0~will be developed in a product containing $2^n$ factors, and
1~in a sum containing $2^n$ summands. The factors of zero
comprise all possible additive combinations, and the summands
of~1 all possible multiplicative combinations of the~$n$ given
terms and their negatives, each combination comprising~$n$
different terms and never containing a term and its negative
at the same time.
The summands of the development of~1 are what \author{}{Boole} called
the \emph{constituents}\index{Constituents} (of the universe of
discourse). We may equally well call them, with
\author{}{Poretsky,}\footnote{See the Bibliography, page xiv.} the
\emph{minima} of discourse, because they are the smallest classes
into which the universe of discourse is divided with reference to
the~$n$ given terms. In the same way we shall call the factors of
the development of~0 the \emph{maxima} of discourse, because they
are the largest classes that can be determined in the universe of
discourse by means of the $n$ given terms.
\section{Properties of the Constituents}
\label{ch:22}\index{Constituents!Properties of} The constituents
or minima of discourse possess two properties characteristic of
contradictory terms (of which they are a generalization); they are
\emph{mutually exclusive}, \emph{i.e.}, the product of any two of
them is~0; and they are \emph{collectively exhaustive},
\emph{i.e.}, the sum of all ``exhausts'' the universe of
discourse. The latter property is evident from the preceding
formulas. The other results from the fact that any two
constituents differ at least in the ``sign'' of one of the terms
which serve as factors, \emph{i.e.}, one contains this term as a
factor and the other the negative of this term. This is enough, as
we know, to ensure that their product be null.
The maxima of discourse possess analogous and correlative
properties; their combined product is equal to~0, as we have
seen; and the sum of any two of them is equal to~1, inasmuch
as they differ in the sign of at least one of the terms which
enter into them as summands.
For the sake of simplicity, we shall confine ourselves, with
\author{}{Boole} and \author{}{Schröder,} to the study of the constituents or
minima of discourse, \emph{i.e.}, the developments of~1. We shall
leave to the reader the task of finding and demonstrating the
corresponding theorems which concern the maxima of discourse or
the developments of~0.
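The two characteristic properties of the constituents can be verified by computation: at each point of the universe exactly one constituent takes the value 1, which gives exclusivity and exhaustivity at once. A modern sketch in Python (names ours):

```python
from itertools import product

def constituents(n):
    """The 2^n constituents (minima of discourse) on n terms,
    as 0/1 functions of an assignment of the terms."""
    out = []
    for signs in product((0, 1), repeat=n):
        def c(point, signs=signs):
            v = 1
            for s, x in zip(signs, point):
                v &= x if s else 1 - x   # factor x_i or its negative x_i'
            return v
        out.append(c)
    return out

cs = constituents(3)
assert len(cs) == 2 ** 3
for p in product((0, 1), repeat=3):
    # exactly one constituent is 1: mutually exclusive, collectively exhaustive
    assert sum(c(p) for c in cs) == 1
```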
\section{Logical Functions}\label{ch:23}
We shall call a \emph{logical function}
any term whose expression is complex, that is, formed of
letters which denote simple terms together with the signs of
the three logical operations.\footnote{In this algebra the logical function is analogous to the \emph{integral
function} of ordinary algebra, except that it has no powers beyond
the first.}
A logical function may be considered as a function of all
the terms of discourse, or only of some of them which may
be regarded as unknown or variable and which in this case
are denoted by the letters $x, y, z$. We shall represent a
function of the variables or unknown quantities, $x, y, z$, by
the symbol $f(x, y, z)$ or by other analogous symbols, as in
ordinary algebra. Once for all, a logical function may be
considered as a function of any term of the universe of discourse,
whether or not the term appears in the explicit expression
of the function.
\section{The Law of Development}\label{ch:24}
This being established,
we shall proceed to develop a function $f(x)$ with respect to~$x$.
Suppose the problem solved, and let
\begin{displaymath}
ax + bx'
\end{displaymath}
be the development sought. By hypothesis we have the
equality
\begin{displaymath}
f(x) = ax + bx'
\end{displaymath}
for all possible values of~$x$. Make $x = 1$ and consequently
$x' = 0$. We have
\begin{displaymath}
f(1) = a.
\end{displaymath}
Then put $x = 0$ and $x' = 1$; we have
\begin{displaymath}
f(0) = b.
\end{displaymath}
These two equalities determine the coefficients~$a$ and~$b$ of
the development which may then be written as follows:
\begin{displaymath}
f(x) = f(1)x + f(0)x',
\end{displaymath}
in which $f(1)$, $f(0)$ represent the value of the function $f(x)$
when we let $x = 1$ and $x = 0$ respectively.
\emph{Corollary.}---Multiplying both members of the preceding
equalities by~$x$ and~$x'$ in turn, we have the following pairs
of equalities (\author{}{MacColl}):
\begin{alignat*}{2}
xf(x) &= ax &\quad x'f(x) &= bx'\\
xf(x) &= xf(1), &\quad x'f(x) &= x'f(0).
\end{alignat*}
Now let a function of two (or more) variables be developed with
respect to the two variables~$x$ and~$y$. Developing $f(x, y)$
first with respect to~$x$, we find
\begin{displaymath}
f(x, y) = f(1, y)x + f(0, y)x'.
\end{displaymath}
Then, developing the second member with respect to~$y$,
we have
\begin{displaymath}
f(x, y) = f(1, 1)xy + f(1, 0)xy' + f(0, 1)x'y + f(0, 0)x'y'.
\end{displaymath}
This result is symmetrical with respect to the two variables,
and therefore independent of the order in which the developments
with respect to each of them are performed.
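The law of development for two variables can be confirmed for all sixteen Boolean functions of two variables; a brief modern check in Python (names ours):

```python
from itertools import product

def neg(a):
    return 1 - a

def develop2(f):
    # the developed form f(1,1)xy + f(1,0)xy' + f(0,1)x'y + f(0,0)x'y'
    return lambda x, y: ((f(1, 1) & x & y) | (f(1, 0) & x & neg(y))
                         | (f(0, 1) & neg(x) & y) | (f(0, 0) & neg(x) & neg(y)))

# check the law for every Boolean function of two variables
for bits in product((0, 1), repeat=4):
    table = dict(zip([(1, 1), (1, 0), (0, 1), (0, 0)], bits))
    f = lambda x, y, t=table: t[(x, y)]
    g = develop2(f)
    assert all(f(x, y) == g(x, y) for x, y in product((0, 1), repeat=2))
```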
In the same way we can obtain progressively the development
of a function of $3, 4, \ldots$ variables.
The general law of these developments is as follows:
To develop a function with respect to $n$ variables, form all
the constituents of these $n$ variables and multiply each of
them by the value assumed by the function when each of
the simple factors of the corresponding constituent is equated
to~1 (which is the same thing as equating to~0 those factors
whose negatives appear in the constituent).
When a variable with respect to which the development is
made, $y$~for instance, does not appear explicitly in the
function ($f(x)$ for instance), we have, according to the
general law,
\begin{displaymath}
f(x) = f(x)y + f(x)y'.
\end{displaymath}
In particular, if $a$ is a constant term, independent of the
variables with respect to which the development is made,
we have for its successive developments,
\begin{align*}
a &= ax + ax',\\
a &= axy + axy' + ax'y + ax'y',\\
a &= \begin{aligned}[t]
axyz &+ axyz' + axy'z + axy'z' + ax'yz + ax'yz' + ax'y'z\\
&+ ax'y'z'.\footnotemark\index{Classification of dichotomy}
\end{aligned}
\end{align*}
\footnotetext{These formulas express the method of classification
by dichotomy.} and so on. Moreover these formulas may be directly
obtained by multiplying by~$a$ both members of each development
of~1.
\emph{Cor}. 1. We have the equivalence
\begin{displaymath}
(a + x') (b + x) = ax + bx' + ab = ax + bx'.
\end{displaymath}
For, if we develop with respect to $x$, we have
\begin{displaymath}
ax + bx' + abx + abx' = (a + ab)x + (b + ab)x' = ax + bx'.
\end{displaymath}
\emph{Cor}. 2. We have the equivalence
\begin{displaymath}
ax + bx' + c = (a + c)x + (b + c)x'.
\end{displaymath}
For if we develop the term $c$ with respect to $x$, we find
\begin{displaymath}
ax + bx' + cx + cx' = (a + c)x + (b + c)x'.
\end{displaymath}
Thus, when a function contains terms (whose sum is
represented by~$c$) independent of~$x$, we can always reduce it
to the developed form $ax + bx'$ by adding~$c$ to the coefficients
of both~$x$ and~$x'$. Therefore we can always consider a
function to be reduced to this form.
In practice, we perform the development by multiplying
each term which does not contain a certain letter ($x$ for
instance) by $(x + x')$ and by developing the product according
to the distributive law. Then, when desired, like terms may
be reduced to a single term.
\section{The Formulas of De Morgan}
\label{ch:25}\index{De Morgan!Formulas of|(}
\emph{In any development of 1, the sum of a certain number of
constituents is the negative of the sum of all the others.}
For, by hypothesis, the sum of these two sums is equal to~1, and their
product is equal to~0, since the product of two different constituents
is zero.
From this proposition may be deduced the formulas of
\author{}{De Morgan:}
\begin{displaymath}
(a + b)' = a'b', \quad (ab)' = a' + b'.
\end{displaymath}
\emph{Demonstration}.---Let us develop the sum $(a + b)$:
\begin{displaymath}
a + b = ab + ab' + ab + a'b = ab + ab' + a'b.
\end{displaymath}
Now the development of~1 with respect to~$a$ and~$b$ contains
the three terms of this development plus a fourth term $a'b'$.
This fourth term, therefore, is the negative of the sum of the
other three.
We can demonstrate the second formula either by a correlative
argument (\emph{i.e.}, considering the development of 0 by
factors) or by observing that the development of $(a' + b')$,
\begin{displaymath}
a'b + ab' + a'b',
\end{displaymath}
differs from the development of 1 only by the summand $ab$.
How \author{}{De Morgan's} formulas may be generalized is now
clear; for instance we have for a sum of three terms,
\begin{displaymath}
a + b + c = abc + abc' + ab'c + ab'c'+ a'bc + a'bc' + a'b'c.
\end{displaymath}
This development differs from the development of 1 only
by the term $a'b'c'$. Thus we can demonstrate the formulas
\begin{displaymath}
(a + b + c)' = a'b'c', \quad (abc)' = a' + b' + c',
\end{displaymath}
which are generalizations of \author{}{De Morgan's} formulas.
The formulas of \author{}{De Morgan} are in very frequent use in
calculation, for they make it possible to perform the negation
of a sum or a product by transferring the negation to the
simple terms: the negative of a sum is the product of the
negatives of its summands; the negative of a product is the
sum of the negatives of its factors.
These formulas, again, make it possible to pass from a primary
proposition to its correlative proposition by duality, and to
demonstrate their equivalence. For this purpose it is only necessary
to apply the law of contraposition%
\index{Contraposition!Law of} to the given proposition, and then
to perform the negation of both members.
\emph{Example:}
\begin{displaymath}
ab + ac + bc = (a + b) (a + c) (b + c).
\end{displaymath}
\emph{Demonstration:}
\begin{align*}
(ab + ac + bc)' &= [(a + b) (a + c) (b + c)]', \\
(ab)'(ac)'(bc)' &= (a + b)'+(a + c)' + (b + c)', \\
(a' + b') (a'+ c') (b' + c') &= a'b' + a'c' + b'c'.
\end{align*}
Since the simple terms, $a, b, c$, may be any terms, we may
suppress the sign of negation by which they are affected, and
obtain the given formula.
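De Morgan's formulas, their three-term generalization, and the example just demonstrated can all be checked exhaustively over the two-element algebra; a modern Python sketch (names ours):

```python
from itertools import product

def neg(a):
    return 1 - a

for a, b, c in product((0, 1), repeat=3):
    assert neg(a | b) == neg(a) & neg(b)               # (a + b)' = a'b'
    assert neg(a & b) == neg(a) | neg(b)               # (ab)' = a' + b'
    assert neg(a | b | c) == neg(a) & neg(b) & neg(c)  # generalized form
    # the example: ab + ac + bc = (a + b)(a + c)(b + c)
    assert (a & b) | (a & c) | (b & c) == (a | b) & (a | c) & (b | c)
```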
Thus \author{}{De Morgan's} formulas furnish a means by which to
find or to demonstrate the formula correlative to another; but, as
we have said above (\S\ref{ch:14}), they are not the basis of
this correlation.%
\index{De Morgan!Formulas of|)}
\section{Disjunctive Sums}\label{ch:26}
By means of development we can transform any sum into a
\emph{disjunctive} sum, \emph{i.e.}, one in which each product of
its summands taken two by two is zero. For, let $(a + b + c)$ be a
sum of which we do not know whether or not the three terms are
disjunctive; let us assume that they are not. Developing, we have:
\begin{displaymath}
a + b + c = abc + abc' + ab'c + ab'c' + a'bc + a'bc' + a'b'c.
\end{displaymath}
Now, the first four terms of this development constitute
the development of~$a$ with respect to~$b$ and~$c$; the two
following are the development of $a'b$ with respect to $c$. The
above sum, therefore, reduces to
\begin{displaymath}
a + a'b + a'b'c,
\end{displaymath}
and the terms of this sum are disjunctive like those of the
preceding, as may be verified. This process is general and,
moreover, obvious. To enumerate without repetition all the
$a$'s, all the $b$'s, and all the $c$'s, etc., it is clearly sufficient to
enumerate all the $a$'s, then all the $b$'s which are not $a$'s, and
then all the $c$'s which are neither $a$'s nor $b$'s, and so on.
It will be noted that the expression thus obtained is not
symmetrical, since it depends on the order assigned to the
original summands. Thus the same sum may be written:
\begin{displaymath}
b + ab' + a'b'c, \quad c + ac' + a'bc', \ldots.
\end{displaymath}
Conversely, in order to simplify the expression of a sum,
we may suppress as factors in each of the summands (arranged
in any suitable order) the negatives of each preceding summand.
Thus, we may find a symmetrical expression for a
sum. For instance,
\begin{displaymath}
a + a'b = b + ab' = a + b.
\end{displaymath}
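The disjunctive transformation $a + b + c = a + a'b + a'b'c$, together with the disjointness of its summands, is easy to verify by exhaustion; a brief Python illustration (names ours):

```python
from itertools import product

def neg(t):
    return 1 - t

for a, b, c in product((0, 1), repeat=3):
    t1, t2, t3 = a, neg(a) & b, neg(a) & neg(b) & c
    assert (a | b | c) == t1 | t2 | t3       # a + b + c = a + a'b + a'b'c
    # the summands are disjunctive: each pairwise product is zero
    assert t1 & t2 == 0 and t1 & t3 == 0 and t2 & t3 == 0
```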
\section{Properties of Developed Functions}\label{ch:27}
The practical
utility of the process of development in the algebra of logic
lies in the fact that developed functions possess the following
property:
The sum or the product of two functions developed with respect to
the same letters is obtained simply by finding the sum or the
product of their coefficients. The negative of a developed
function is obtained simply by replacing the coefficients of its
development by their negatives.
We shall now demonstrate these propositions in the case
of two variables; this demonstration will of course be of
universal application.
Let the developed functions be
\begin{gather*}
a_1 xy + b_1 xy' + c_1 x'y + d_1 x'y',\\
a_2 xy + b_2 xy' + c_2 x'y + d_2 x'y'.
\end{gather*}
1. I say that their sum is
\begin{displaymath}
(a_1 + a_2) xy + (b_1 + b_2) xy' + (c_1 + c_2) x'y + (d_1 + d_2) x'y'.
\end{displaymath}
This result is derived directly from the distributive law.
2. I say that their product is
\begin{displaymath}
a_1 a_2 xy + b_1 b_2 xy' + c_1 c_2 x'y + d_1 d_2 x'y',
\end{displaymath}
for if we find their product according to the general rule
(by applying the distributive law), the products of two terms
of different constituents will be zero; therefore there will remain
only the products of the terms of the same constituent, and,
as (by the law of tautology) the product of this constituent
multiplied by itself is equal to itself, it is only necessary to
obtain the product of the coefficients.
3. Finally, I say that the negative of
\begin{displaymath}
axy + bxy' + cx'y + dx'y'
\end{displaymath}
is
\begin{displaymath}
a'xy + b'xy' + c'x'y + d'x'y'.
\end{displaymath}
In order to verify this statement, it is sufficient to prove
that the product of these two functions is zero and that their
sum is equal to 1. Thus
\begin{gather*}
\begin{split}
(axy &+ bxy' + cx'y + dx'y') (a'xy + b'xy' + c'x'y + d'x'y')\\
&= (aa'xy + bb'xy' + cc'x'y + dd'x'y')\\
&= (0 \cdot xy + 0 \cdot xy' + 0 \cdot x'y + 0 \cdot x'y') = 0\\
(axy &+ bxy' + cx'y + dx'y') + (a'xy + b'xy' + c'x'y + d'x'y')\\
&= [(a + a') xy + (b + b') xy' + (c + c') x'y + (d + d') x'y']\\
&= (1xy + 1xy' + 1x'y + 1x'y') = 1.
\end{split}
\end{gather*}
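The negation rule for developed functions (the sum and product rules check in just the same way) can be verified over all choices of coefficients; a modern Python sketch (names ours):

```python
from itertools import product

def neg(a):
    return 1 - a

def dev(a, b, c, d):
    # the developed function axy + bxy' + cx'y + dx'y'
    return lambda x, y: ((a & x & y) | (b & x & neg(y))
                         | (c & neg(x) & y) | (d & neg(x) & neg(y)))

for a, b, c, d in product((0, 1), repeat=4):
    f = dev(a, b, c, d)
    g = dev(neg(a), neg(b), neg(c), neg(d))   # coefficients replaced by negatives
    for x, y in product((0, 1), repeat=2):
        assert g(x, y) == neg(f(x, y))        # g is the negative of f
```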
\emph{Special Case}.---We have the equalities:
\begin{align*}
(ab + a'b')' &= ab' + a'b,\\
(ab' + a'b)' &= ab + a'b',
\end{align*}
which may easily be demonstrated in many ways; for instance, by
observing that the two sums $(ab + a'b')$ and $(ab'+a'b)$ combined
form the development of~1; or again by \emph{performing} the
negation $(ab + a'b')'$ by means of \author{}{De Morgan's}
formulas~(\S\ref{ch:25}).
From these equalities we can deduce the following equality:
\begin{displaymath}
(ab' + a'b = 0) = (ab + a'b' = 1),
\end{displaymath}
which result might also have been obtained in another way
by observing that~(\S\ref{ch:18})
\begin{displaymath}
(a = b) = (ab'+ a'b = 0) = [(a + b') (a'+ b) = 1],
\end{displaymath}
and by performing the multiplication indicated in the last
equality.
\textsc{Theorem}.---\emph{We have the following equivalences:}%
\footnote{\author{}{W. Stanley Jevons,} \emph{Pure Logic}, 1864,
p.~61.}
\begin{displaymath}
(a = bc' + b'c) = (b = ac' + a'c) = (c = ab'+ a'b).
\end{displaymath}
For, reducing the first of these equalities so that its second
member will be~0,
\begin{align*}
a(bc + b'c') + a' (bc'+ b'c) &= 0,\\
abc + ab'c' + a'bc' + a'b'c &= 0.
\end{align*}
Now it is clear that the first member of this equality is
symmetrical with respect to the three terms $a, b, c$. We may
therefore conclude that, if the two other equalities which differ
from the first only in the permutation of these three letters
be similarly transformed, the same result will be obtained,
which proves the proposed equivalence.
\emph{Corollary}.---If we have at the same time the three inclusions:
\begin{displaymath}
a < bc' + b'c, \quad b < ac' + a'c, \quad c < ab' + a'b,
\end{displaymath}
we have also the converse inclusions, and therefore the
corresponding equalities
\begin{displaymath}
a=bc'+b'c, \quad b=ac'+a'c, \quad c=ab'+a'b.
\end{displaymath}
For if we transform the given inclusions into equalities, we
shall have
\begin{displaymath}
abc + ab'c' = 0, \quad abc + a'bc' = 0, \quad abc + a'b'c = 0,
\end{displaymath}
whence, by combining them into a single equality,
\begin{displaymath}
abc + ab'c' + a'bc' + a'b'c = 0.
\end{displaymath}
Now this equality, as we see, is equivalent to any one of
the three equalities to be demonstrated.
\section{The Limits of a Function}\label{ch:28}
A term~$x$ is said to be
\emph{comprised} between two given terms,~$a$ and~$b$, when it contains
one and is contained in the other; that is to say, if we have,
for instance,
\begin{displaymath}
a < x, \quad x < b,
\end{displaymath}
which we may write more briefly as
\begin{displaymath}
a < x < b.
\end{displaymath}
Such a formula is called a \emph{double inclusion}. When the
term~$x$ is variable and always comprised between two
constant terms $a$ and $b$, these terms are called the \emph{limits}
of~$x$. The first (contained in~$x$) is called the \emph{inferior limit}; the
second (which contains $x$) is called the \emph{superior limit}.
\author{}{Theorem.}---\emph{A developed function is comprised between the sum
and the product of its coefficients.}
We shall first demonstrate this theorem for a function of
one variable,
\begin{displaymath}
ax + bx'.
\end{displaymath}
We have, on the one hand,
\begin{align*}
(ab < a) &< (abx < ax),\\
(ab < b) &< (abx' < bx').
\end{align*}
Therefore
\begin{displaymath}
abx + abx' < ax + bx',
\end{displaymath}
or
\begin{displaymath}
ab < ax + bx'.
\end{displaymath}
On the other hand,
\begin{align*}
(a < a + b) &< [ax < (a + b)x],\\
(b < a + b) &< [bx' < (a + b)x'].
\end{align*}
Therefore
\begin{displaymath}
ax + bx' < (a + b) (x + x'),
\end{displaymath}
or
\begin{displaymath}
ax + bx' < a + b.
\end{displaymath}
To sum up,
\begin{displaymath}
ab < ax + bx' < a + b.
\end{displaymath}
Q. E. D.
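The double inclusion $ab < ax + bx' < a + b$ can be confirmed by exhausting the eight cases of the two-element algebra; a brief modern check in Python (names ours):

```python
from itertools import product

def dev1(a, b, x):
    return (a & x) | (b & (1 - x))   # the developed function ax + bx'

for a, b, x in product((0, 1), repeat=3):
    # ab < ax + bx' < a + b, reading the inclusion as <=
    assert (a & b) <= dev1(a, b, x) <= (a | b)
```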
\emph{Remark} 1.---This double inclusion may be expressed in the
following form:%
\footnote{\author{}{Eugen Müller,} \emph{Aus der Algebra der
Logik}, Art. II.}
\begin{displaymath}
f(b) < f(x) < f(a).
\end{displaymath}
For
\begin{gather*}
f(a) = aa + ba' = a + b,\\
f(b) = ab + bb' = ab.
\end{gather*}
But this form, pertaining as it does to an equation of one
unknown quantity, does not appear susceptible of generalization,
whereas the other one does so appear, for it is readily seen
that the former demonstration is of general application.
Whatever the number of variables $n$ (and consequently the
number of constituents $2^{n}$) it may be demonstrated in exactly
the same manner that the function contains the product of
its coefficients and is contained in their sum. Hence the
theorem is of general application.
\emph{Remark} 2.---This theorem assumes that all the constituents
appear in the development, consequently those that are wanting
must really be present with the coefficient~0. In this case,
the product of all the coefficients is evidently~0. Likewise
when one coefficient has the value~1, the sum of all the
coefficients is equal to~1.
It will be shown later (\S\ref{ch:38}) that a function may reach
both its limits, and consequently that they are its extreme
values. As yet, however, we know only that it is always
comprised between them.
\section{Formula of Poretsky.%
\protect\footnote{\author{}{Poretsky,} ``Sur les méthodes pour
résoudre les égalités logiques''. (\emph{Bull. de la Soc.
phys.-math. de Kazan}, Vol. II, 1884).}}\label{ch:29}
We have the equivalence
\begin{displaymath}
(x = ax + bx') = (b < x < a).
\end{displaymath}
\emph{Demonstration.}---First multiplying by~$x$ both members of
the given equality [which is the first member of the entire
secondary equality], we have
\begin{displaymath}
x = ax,
\end{displaymath}
which, as we know, is equivalent to the inclusion
\begin{displaymath}
x < a.
\end{displaymath}
Now multiplying both members by $x'$, we have
\begin{displaymath}
0 = bx',
\end{displaymath}
which, as we know, is equivalent to the inclusion
\begin{displaymath}
b < x.
\end{displaymath}
Summing up, we have
\begin{displaymath}
(x = ax + bx') < (b < x < a).
\end{displaymath}
Conversely,
\begin{displaymath}
(b < x < a) < (x = ax + bx').
\end{displaymath}
For
\begin{align*}
(x < a) &= (x = ax),\\
(b < x) &= (bx' = 0).
\end{align*}
Adding these two equalities member to member [the second
members of the two larger equalities],
\begin{displaymath}
(x = ax) (0 = bx') < (x = ax + bx').
\end{displaymath}
Therefore
\begin{displaymath}
(b < x < a) < (x = ax + bx')
\end{displaymath}
and thus the equivalence is proved.
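Poretsky's equivalence lends itself to a mechanical verification in the two-element interpretation, reading each inclusion as a numerical inequality; a modern Python sketch (names ours):

```python
from itertools import product

def dev1(a, b, x):
    return (a & x) | (b & (1 - x))   # the developed function ax + bx'

for a, b, x in product((0, 1), repeat=3):
    lhs = (x == dev1(a, b, x))       # x = ax + bx'
    rhs = (b <= x <= a)              # b < x < a
    assert lhs == rhs
```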
\section{Schröder's Theorem.%
\protect\footnote{\author{}{Schröder,} \emph{Operationskreis des Logikkalküls} (1877), Theorem 20.}}%
\label{ch:30}
The equality
\begin{displaymath}
ax + bx' = 0
\end{displaymath}
signifies that~$x$ lies between~$a'$ and~$b$.
\emph{Demonstration:}
\begin{align*}
(ax + bx' = 0) &= (ax = 0) (bx' = 0),\\
(ax = 0) &= (x < a'),\\
(bx' = 0) &= (b < x).
\end{align*}
Hence
\begin{displaymath}
(ax + bx' = 0) = (b < x < a').
\end{displaymath}
Comparing this theorem with the formula of \author{}{Poretsky,} we
obtain at once the equality
\begin{displaymath}
(ax + bx' = 0) = (x = a' x + bx'),
\end{displaymath}
which may be directly proved by reducing the formula of
\author{}{Poretsky} to an equality whose second member is~0, thus:
\begin{displaymath}
(x = a'x + bx') = [x (ax + b'x') + x' (a'x + bx') = 0] = (ax + bx' = 0).
\end{displaymath}
If we consider the given equality as an \emph{equation} in which
$x$~is the unknown quantity, \author{}{Poretsky's} formula will be
its solution.
From the double inclusion
\begin{displaymath}
b < x < a'
\end{displaymath}
we conclude, by the principle of the syllogism, that
\begin{displaymath}
b < a'.
\end{displaymath}
This is a consequence of the given equality and is independent
of the unknown quantity~$x$. It is called the
\emph{resultant of the elimination} of~$x$ in the given equation. It is
equivalent to the equality
\begin{displaymath}
ab = 0.
\end{displaymath}
Therefore we have the implication
\begin{displaymath}
(ax + bx' = 0) < (ab = 0).
\end{displaymath}
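Schröder's theorem and the resultant of elimination can both be confirmed by exhaustion in the two-element algebra; an illustrative Python check (names ours):

```python
from itertools import product

def dev1(a, b, x):
    return (a & x) | (b & (1 - x))       # the developed function ax + bx'

for a, b, x in product((0, 1), repeat=3):
    eq = (dev1(a, b, x) == 0)            # the equation ax + bx' = 0
    assert eq == (b <= x <= 1 - a)       # equivalent to b < x < a'
    if eq:
        assert a & b == 0                # the resultant ab = 0 follows
```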
Taking this consequence into consideration, the solution
may be simplified, for
\begin{displaymath}
(ab = 0) = (b = a'b).
\end{displaymath}
Therefore
\begin{displaymath}
\begin{split}
x &= a'x + bx' = a'x + a'bx'\\
&= a'bx + a'b'x + a'bx' = a'b + a'b'x\\
&= b + a'b'x = b + a'x.
\end{split}
\end{displaymath}
This form of the solution conforms most closely to common sense:
since~$x'$ contains~$b$ and is contained in~$a'$, it is natural
that~$x$ should be equal to the sum of~$b$ and a part of~$a'$
(that is to say, the part common to~$a'$ and~$x$). The solution is
generally indeterminate (between the limits~$a'$ and~$b$); it is
determinate only when the limits are equal,
\begin{displaymath}
a' = b,
\end{displaymath}
for then
\begin{displaymath}
x = b + a'x = b + bx = b = a'.
\end{displaymath}
Then the equation assumes the form
\begin{displaymath}
(ax + a'x' = 0) = (a' = x)
\end{displaymath}
and is equivalent to the double inclusion
\begin{displaymath}
(a' < x < a') = (x = a').
\end{displaymath}
\section{The Resultant of Elimination}\label{ch:31}
When $ab$ is not zero, the equation is impossible (always false), because
it has a false consequence. It is for this reason that \author{}{Schröder}
considers the resultant of the elimination as a \emph{condition} of the
equation. But we must not be misled by this equivocal word. The resultant
of the elimination of $x$ is not a \emph{cause}\index{Cause} of
the equation, it is a \emph{consequence}\index{Consequence} of it; it is not a \emph{sufficient}%
\index{Condition!Necessary but not sufficient} but a \emph{necessary}
condition.
The same conclusion may be reached by observing that
$ab$ is the inferior limit of the function $ax + bx'$, and that
consequently the function can not vanish unless this limit is~0.
\begin{displaymath}
(ab < ax + bx') (ax + bx' = 0) < (ab = 0).
\end{displaymath}
We can express the resultant of elimination in other equivalent
forms; for instance, if we write the equation in the form
\begin{displaymath}
(a + x') (b + x) = 0,
\end{displaymath}
we observe that the resultant
\begin{displaymath}
ab = 0
\end{displaymath}
is obtained simply by dropping the unknown quantity (by
suppressing the terms~$x$ and~$x'$). Again the equation may be
written:
\begin{displaymath}
a'x + b'x' = 1
\end{displaymath}
and the resultant of elimination:
\begin{displaymath}
a' + b' = 1.
\end{displaymath}
Here again it is obtained simply by dropping the unknown
quantity.%
\footnote{This is the method of elimination of Mrs.
\author{}{Ladd-Franklin} and Mr. \author{}{Mitchell,} but this
rule is deceptive in its apparent simplicity, for it cannot be
applied to the same equation when put in either of the forms
\begin{displaymath}
ax + bx' = 0, \quad (a' + x') (b' +x) = 1.
\end{displaymath}
Now, on the other hand, as we shall see (\S\ref{ch:54}), for inequalities it
may be applied to the forms
\begin{displaymath}
ax + bx' \neq 0, \quad (a' + x') (b' + x) \neq 1,
\end{displaymath}
and not to the equivalent forms
\begin{displaymath}
(a + x') (b + x) \neq 0, \quad a'x + b'x' \neq 1.
\end{displaymath}
Consequently, it has not the mnemonic property attributed to it, for, to
use it correctly, it is necessary to recall to which forms it is applicable.}
\emph{Remark}. If in the equation
\begin{displaymath}
ax + bx' = 0
\end{displaymath}
we substitute for the unknown quantity~$x$ its value derived
from the equations,
\begin{displaymath}
x = a'x + bx', \quad x' = ax + b'x',
\end{displaymath}
we find
\begin{displaymath}
(abx + abx' = 0) = (ab = 0),
\end{displaymath}
that is to say, the resultant of the elimination of~$x$ which, as
we have seen, is a consequence of the equation itself. Thus we are
assured that the value of~$x$ verifies this equation. Therefore we
can, with \author{}{Voigt,} define the solution of an equation as
that value which, when substituted for~$x$ in the equation,
reduces it to the resultant of the elimination of~$x$.
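Voigt's observation can be tested exhaustively: substituting Poretsky's value for the unknown leaves exactly $ab$, so the equation reduces to its resultant. This check in the two-element algebra is ours, not the author's.

```python
from itertools import product

def substituted(a, b, x):
    """Value of the first member ax + bx' after replacing x by a'x + bx'."""
    x_sub = ((1 - a) & x) | (b & (1 - x))       # Poretsky's value for x
    return (a & x_sub) | (b & (1 - x_sub))      # ax + bx' at that value

# For every a, b, x the substitution yields exactly ab, so the equation
# collapses to the resultant ab = 0.
assert all(substituted(a, b, x) == (a & b)
           for a, b, x in product((0, 1), repeat=3))
```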
\emph{Special Case}.---When the equation contains a term
independent of~$x$, \emph{i.e.}, when it is of the form
\begin{displaymath}
ax + bx' + c = 0
\end{displaymath}
it is equivalent to
\begin{displaymath}
(a+c)x + (b+c)x' = 0,
\end{displaymath}
and the resultant of elimination is
\begin{displaymath}
(a + c) (b + c) = ab + c = 0,
\end{displaymath}
whence we derive this practical rule: To obtain the resultant of
the elimination of~$x$ in this case, it is sufficient to equate to
zero the product of the coefficients of~$x$ and~$x'$, and add to
them the term independent of~$x$.
\section{The Case of Indetermination}\label{ch:32}
Just as the resultant
\begin{displaymath}
ab = 0
\end{displaymath}
corresponds to the case when the equation is possible, so the
equality
\begin{displaymath}
a + b = 0
\end{displaymath}
corresponds to the case of \emph{absolute indetermination}. For in
this case the equation both of whose coefficients are zero
$(a = 0)$, $(b = 0)$, is reduced to an identity $(0 = 0)$, and
therefore is ``identically'' verified, whatever the value of~$x$ may
be; it does not determine the value of~$x$ at all, since the
double inclusion
\begin{displaymath}
b < x < a'
\end{displaymath}
then becomes
\begin{displaymath}
0 < x < 1
\end{displaymath}
which does not limit in any way the variability of~$x$. In this
case we say that the equation is \emph{indeterminate}.
We shall reach the same conclusion if we observe that
$(a + b)$ is the superior limit of the function $ax + bx'$ and that,
if this limit is 0, the function is necessarily zero for all
values of $x$,
\begin{displaymath}
(ax + bx' < a + b) (a + b = 0) < (ax + bx' = 0).
\end{displaymath}
\emph{Special Case}.---When the equation contains a term independent of~$x$,
\begin{displaymath}
ax + bx' + c = 0,
\end{displaymath}
the condition of absolute indetermination takes the form
\begin{displaymath}
a + b + c = 0.
\end{displaymath}
For
\begin{align*}
ax + bx' + c &= (a + c)x + (b + c)x', \\
(a + c) + (b + c) &= a + b + c = 0.
\end{align*}
\section{Sums and Products of Functions}\label{ch:33}
It is desirable
at this point to introduce a notation borrowed from mathematics,
which is very useful in the algebra of logic. Let $f(x)$
be an expression containing one variable; suppose that the
class of all the possible values of $x$ is determined; then the
class of all the values which the function $f(x)$ can assume
in consequence will also be determined. Their sum will be
represented by $\sum_{x} f(x)$ and their product by $\prod_{x} f(x)$. This
is a new notation and not a new notion, for it is merely the
idea of sum and product applied to the values of a function.
When the symbols $\sum$ and $\prod$ are applied to propositions,
they assume an interesting significance:
\begin{displaymath}
\prod_{x} [f(x) = 0]
\end{displaymath}
means that $f(x) = 0$ is true for \emph{every} value of $x$; and
\begin{displaymath}
\sum _{x} [f(x) = 0]
\end{displaymath}
that $f(x) = 0$ is true for \emph{some} value of $x$. For, in
order that a product may be equal to~1 (\emph{i.e.}, be true), all
its factors must be equal to~1 (\emph{i.e.}, be true); but, in
order that a sum may be equal to~1 (\emph{i.e.}, be true), it is
sufficient that only one of its summands be equal to $1$
(\emph{i.e.}, be true). Thus we have a means of expressing
universal and particular propositions when they are applied to
variables, especially those in the form: ``For every value of~$x$
such and such a proposition is true'', and ``For some value
of~$x$, such and such a proposition is true'', etc.
For instance, the equivalence
\begin{displaymath}
(a = b) = (ac = bc) (a + c = b + c)
\end{displaymath}
is somewhat paradoxical because the second member contains
a term~($c$) which does not appear in the first. This equivalence
is independent of~$c$, so that we can write it as follows,
considering~$c$ as a variable~$x$
\begin{displaymath}
\prod_{x} [(a= b) = (ax = bx) (a + x = b + x)],
\end{displaymath}
or, the first member being independent of~$x$,
\begin{displaymath}
(a = b) = \prod_{x} [(ax = bx) (a + x = b + x)].
\end{displaymath}
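The ``paradoxical'' equivalence, once written with the product sign, can be verified exhaustively in the two-element algebra; the sketch below is an editorial illustration with names of our choosing.

```python
from itertools import product

def both_agree(a, b):
    """True when ax = bx and a + x = b + x for EVERY x in {0, 1}."""
    return all((a & x) == (b & x) and (a | x) == (b | x) for x in (0, 1))

# (a = b) holds exactly when both member-conditions hold for every x.
assert all((a == b) == both_agree(a, b)
           for a, b in product((0, 1), repeat=2))
```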
In general, when a proposition contains a variable term,
great care is necessary to distinguish the case in which it is
true for \emph{every} value of the variable, from the case in which
it is true only for some value of the variable.%
\footnote{This is the same as the distinction made in mathematics between
\emph{identities} and \emph{equations}, except that an equation may not be verified by
any value of the variable.} This is the
purpose that the symbols $\prod$ and $\sum$ serve.
Thus when we say for instance that the equation
\begin{displaymath}
ax + bx' = 0
\end{displaymath}
is possible, we are stating that it can be verified by some
value of~$x$; that is to say,
\begin{displaymath}
\sum_{x} (ax + bx' = 0),
\end{displaymath}
and, since the necessary and sufficient condition%
\index{Condition!Necessary and sufficient} for this is that the resultant
$(ab = 0)$ is true, we must write
\begin{displaymath}
\sum_{x} (ax + bx' = 0) = (ab = 0),
\end{displaymath}
although we have only the implication
\begin{displaymath}
(ax + bx' = 0) < (ab = 0).
\end{displaymath}
On the other hand, the necessary and sufficient condition%
\index{Condition!Necessary and sufficient} for the equation to be verified
by every value of~$x$ is that
\begin{displaymath}
a + b = 0.
\end{displaymath}
\emph{Demonstration}.---1. The condition is sufficient, for if
\begin{displaymath}
(a + b = 0) = (a = 0) (b = 0),
\end{displaymath}
we obviously have
\begin{displaymath}
ax + bx' = 0
\end{displaymath}
whatever the value of $x$; that is to say,
\begin{displaymath}
\prod_{x} (ax+ bx' = 0).
\end{displaymath}
2. The condition is necessary, for if
\begin{displaymath}
\prod_{x} (ax + bx' = 0),
\end{displaymath}
the equation is true, in particular, for the value $x = a$; hence
\begin{displaymath}
a + b = 0.
\end{displaymath}
Therefore the equivalence
\begin{displaymath}
\prod_{x} (ax + bx' = 0) = (a + b = 0)
\end{displaymath}
is proved.%
\footnote{\author{}{Eugen Müller,} \emph{op.~cit}.} In this
instance, the equation reduces to an identity: its first member is
``identically'' null.
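Both quantified equivalences of this section admit a direct exhaustive check in the two-element algebra. The following sketch (our own illustration, with `any`/`all` playing the roles of $\sum$ and $\prod$) verifies them for every choice of coefficients:

```python
from itertools import product

def f(a, b, x):
    """The first member ax + bx'."""
    return (a & x) | (b & (1 - x))

# Sigma_x (ax + bx' = 0) = (ab = 0): the equation is solvable iff ab = 0.
# Pi_x (ax + bx' = 0) = (a + b = 0): it is an identity iff a + b = 0.
for a, b in product((0, 1), repeat=2):
    assert any(f(a, b, x) == 0 for x in (0, 1)) == ((a & b) == 0)
    assert all(f(a, b, x) == 0 for x in (0, 1)) == ((a | b) == 0)
```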
\section{The Expression of an Inclusion by Means of an
Indeterminate}\label{ch:34}
The foregoing notation is indispensable in
almost every case where variables or indeterminates occur in
one member of an equivalence, which are not present in the
other. For instance, certain authors predicate the two following
equivalences
\begin{displaymath}
(a < b) = (a = bu) = (a + v = b),
\end{displaymath}
in which $u$, $v$ are two ``indeterminates''. Now, each of the
two equalities has the inclusion ($a < b$) as its consequence,
as we may assure ourselves by eliminating $u$ and $v$ respectively
from the following equalities:
\begin{displaymath}
\tag*{1.} [a (b' + u') + a'bu = 0] = [(ab' + a'b) u + au' = 0].
\end{displaymath}
Resultant:
\begin{displaymath}
[(ab' + a'b) a = 0] = (ab' = 0) = (a < b).
\end{displaymath}
\begin{displaymath}
\tag*{2.} [(a + v) b' + a'bv' = 0] = [b'v + (ab' + a'b) v' = 0].
\end{displaymath}
Resultant:
\begin{displaymath}
[b' (ab' + a'b) = 0] = (ab' = 0) = (a < b).
\end{displaymath}
But we cannot say, conversely, that the inclusion implies
the two equalities for \emph{any values} of~$u$ and~$v$; and, in fact, we
restrict ourselves to the proof that this implication holds for
some value of~$u$ and~$v$, namely for the particular values
\begin{displaymath}
u = a, \quad v = b;
\end{displaymath}
for we have
\begin{displaymath}
(a = ab) = (a < b) = (a + b = b).
\end{displaymath}
But we cannot conclude, from the fact that the implication
(and therefore also the equivalence) is true for \emph{some} value of
the indeterminates, that it is true for \emph{all}; in particular, it is
not true for the values
\begin{displaymath}
u = 1, \quad v = 0,
\end{displaymath}
for then $(a = bu)$ and $(a + v = b)$ become $(a=b)$, which
obviously asserts more than the given inclusion $(a < b)$.%
\footnote{Likewise if we make
\begin{displaymath}
u = 0, \quad v = 1,
\end{displaymath}
we obtain the equalities
\begin{displaymath}
(a = 0), \quad (b = 1),
\end{displaymath}
which assert still more than the given inclusion.}
Therefore we can write only the equivalences
\begin{displaymath}
(a < b) = \sum_u (a = bu) = \sum_v (a + v = b),
\end{displaymath}
but the three expressions
\begin{displaymath}
(a < b), \quad \prod_u (a = bu), \quad \prod_v (a + v = b)
\end{displaymath}
are not equivalent.%
\footnote{According to the remark in the preceding note, it is clear that we have
\begin{displaymath}
\prod_u (a = bu) = (a = b = 0), \quad \prod_v (a + v = b) = (a = b = 1),
\end{displaymath}
since the equalities affected by the sign $\prod$ may be likewise verified
by the values
\begin{displaymath}
u = 0, \quad u = 1 \quad \text{and} \quad v = 0, \quad v = 1.
\end{displaymath}
If we wish to know within what limits the indeterminates $u$ and $v$ are
variable, it is sufficient to solve with respect to them the equations
\begin{displaymath}
(a < b) = (a = bu), \quad (a < b) = (a + v = b),
\end{displaymath}
or
\begin{displaymath}
ab' = a'bu + ab' + au', \quad ab' = ab' + b'v + a'bv',
\end{displaymath}
or
\begin{displaymath}
a'bu + abu' = 0, \quad a'b'v + a'bv' = 0,
\end{displaymath}
from which (by a formula to be demonstrated later on) we derive the
solutions
\begin{displaymath}
u = ab + w (a + b'), \quad v = a'b + w (a + b),
\end{displaymath}
or simply
\begin{displaymath}
u = ab + wb', \quad v = a'b + wa,
\end{displaymath}
$w$ being absolutely indeterminate. We would arrive at these solutions
simply by asking: By what term must we multiply $b$ in order to obtain
$a$? By a term which contains $ab$ plus any part of $b'$. What term must
we add to $a$ in order to obtain $b$? A term which contains $a'b$ plus
any part of $a$. In short, $u$ can vary between $ab$ and $a + b'$, $v$ between
$a'b$ and $a + b$.}
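The distinction between the $\sum$ and $\prod$ readings of these equivalences can be made concrete with a small exhaustive check, given here as an editorial sketch (the function name is ours):

```python
from itertools import product

def included(a, b):
    """a < b, i.e. ab' = 0, in the two-element algebra."""
    return (a & (1 - b)) == 0

# (a < b) holds exactly when a = bu for SOME u, and when a + v = b for SOME v.
for a, b in product((0, 1), repeat=2):
    assert included(a, b) == any(a == (b & u) for u in (0, 1))
    assert included(a, b) == any((a | v) == b for v in (0, 1))

# With "for all" in place of "for some" the equivalence fails: a = b = 1
# satisfies a < b, yet a = bu does not hold for u = 0.
assert included(1, 1) and not all(1 == (1 & u) for u in (0, 1))
```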
\section{The Expression of a Double Inclusion by Means
of an Indeterminate}\label{ch:35}
\textsc{Theorem}. \emph{The double inclusion
\begin{displaymath}
b < x < a
\end{displaymath}
is equivalent to the equality $x = au + bu'$ together with the
condition ($b < a$), $u$ being a term absolutely indeterminate.}
\emph{Demonstration}.---Let us develop the equality in question,
\begin{align*}
x(a'u + b'u') + x'(au + bu') &= 0, \\
(a'x + ax')u + (b'x + bx')u' &= 0.
\end{align*}
Eliminating $u$ from it,
\begin{displaymath}
a'b'x + abx' = 0.
\end{displaymath}
This equality is equivalent to the double inclusion
\begin{displaymath}
ab < x < a + b.
\end{displaymath}
But, by hypothesis, we have
\begin{displaymath}
(b < a) = (ab = b) = (a + b = a).
\end{displaymath}
The double inclusion is therefore reduced to
\begin{displaymath}
b < x < a.
\end{displaymath}
So, whatever the value of~$u$, the equality under consideration
involves the double inclusion. Conversely, the double inclusion
involves the equality, whatever the value of $x$ may be,
for it is equivalent to
\begin{displaymath}
a'x + bx' = 0,
\end{displaymath}
and then the equality is simplified and reduced to
\begin{displaymath}
ax'u + b'xu' = 0.
\end{displaymath}
We can always derive from this the value of $u$ in terms
of $x$, for the resultant $(ab'xx' = 0)$ is identically verified.
The solution is given by the double inclusion
\begin{displaymath}
b' x < u < a' + x.
\end{displaymath}
\emph{Remark}.---There is no contradiction between this result,
which shows that the value of~$u$ lies between certain limits,
and the previous assertion that~$u$ is absolutely indeterminate;
for the latter assumes that~$x$ is any value that will verify the
double inclusion, while when we evaluate~$u$ in terms of~$x$ the
value of $x$ is supposed to be determinate, and it is with
respect to this particular value of~$x$ that the value of~$u$ is
subjected to limits.%
\footnote{Moreover, if we substitute for~$x$ its inferior limit~$b$ in the inferior
limit of~$u$, this limit becomes $bb' = 0$; and, if we substitute for~$x$ its
superior limit~$a$ in the superior limit of~$u$, this limit becomes $a + a' = 1$.}
In order that the value of~$u$ should be completely determined,
it is necessary and sufficient that we should have
\begin{displaymath}
b'x = a' + x,
\end{displaymath}
that is to say,
\begin{displaymath}
b'x \cdot ax' + (b + x')(a' + x) = 0
\end{displaymath}
or
\begin{displaymath}
bx + a'x' = 0.
\end{displaymath}
Now, by hypothesis, we already have
\begin{displaymath}
a' x + bx' = 0.
\end{displaymath}
If we combine these two equalities, we find
\begin{displaymath}
(a + b = 0) = (a = 1) (b = 0).
\end{displaymath}
This is the case when the value of~$x$ is absolutely indeterminate,
since it lies between the limits~0 and~1.
In this case we have
\begin{displaymath}
u = b'x = a' + x = x.
\end{displaymath}
In order that the value of $u$ be absolutely indeterminate,
it is necessary and sufficient that we have at the same time
\begin{displaymath}
b'x = 0, \quad a'+x = 1,
\end{displaymath}
or
\begin{displaymath}
b'x + ax' = 0,
\end{displaymath}
that is
\begin{displaymath}
a < x < b.
\end{displaymath}
Now we already have, by hypothesis,
\begin{displaymath}
b < x < a;
\end{displaymath}
so we may infer
\begin{displaymath}
b = x = a.
\end{displaymath}
This is the case in which the value of $x$ is completely
determinate.
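The theorem of this section admits an exhaustive verification in the two-element algebra; the sketch below is our illustration, with the condition $b < a$ imposed before each check.

```python
from itertools import product

def between(a, b, x):
    """The double inclusion b < x < a, i.e. bx' + a'x = 0."""
    return ((b & (1 - x)) | ((1 - a) & x)) == 0

# Under the condition b < a, the double inclusion holds exactly when
# x = au + bu' for some value of the indeterminate u.
for a, b, x in product((0, 1), repeat=3):
    if b & (1 - a):                 # skip cases violating b < a
        continue
    assert between(a, b, x) == any(x == ((a & u) | (b & (1 - u)))
                                   for u in (0, 1))
```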
\section{Solution of an Equation Involving One Unknown
Quantity}\label{ch:36}
The solution of the equation
\begin{displaymath}
ax + bx' = 0
\end{displaymath}
may be expressed in the form
\begin{displaymath}
x = a'u + bu',
\end{displaymath}
$u$~being an indeterminate, on condition that the resultant of
the equation be verified; for we can prove that this equality
implies the equality
\begin{displaymath}
ab'x + a'bx' = 0,
\end{displaymath}
which is equivalent to the double inclusion
\begin{displaymath}
a'b < x < a' + b.
\end{displaymath}
Now, by hypothesis, we have
\begin{displaymath}
(ab = 0) = (a'b = b) = (a' + b = a').
\end{displaymath}
Therefore, in this hypothesis, the proposed solution implies
the double inclusion
\begin{displaymath}
b < x < a';
\end{displaymath}
which is equivalent to the given equation.
\emph{Remark}.---In the same hypothesis in which we have
\begin{displaymath}
(ab=0)=(b < a'),
\end{displaymath}
we can always put this solution in the simpler but less symmetrical
forms
\begin{displaymath}
x = b + a'u, \quad x = a'(b+u).
\end{displaymath}
For
1. We have identically
\begin{displaymath}
b = bu + bu'.
\end{displaymath}
Now
\begin{displaymath}
(b < a') < (bu < a'u).
\end{displaymath}
Therefore
\begin{displaymath}
(x = bu' + a'u) = (x = b + a'u).
\end{displaymath}
2. Let us now demonstrate the formula
\begin{displaymath}
x = a'b + a'u.
\end{displaymath}
Now
\begin{displaymath}
a'b = b.
\end{displaymath}
Therefore
\begin{displaymath}
x = b + a'u
\end{displaymath}
which may be reduced to the preceding form.
Again, we can put the same solution in the form
\begin{displaymath}
x = a'b + u(ab + a'b'),
\end{displaymath}
which follows from the equation put in the form
\begin{displaymath}
ab'x + a'bx' = 0,
\end{displaymath}
if we note that
\begin{displaymath}
a'+ b = ab + a'b + a'b'
\end{displaymath}
and that
\begin{displaymath}
ua'b < a'b.
\end{displaymath}
This last form is needlessly complicated, since, by hypothesis,
\begin{displaymath}
ab = 0.
\end{displaymath}
Therefore there remains
\begin{displaymath}
x = a'b + ua'b'
\end{displaymath}
which again is equivalent to
\begin{displaymath}
x = b + ua',
\end{displaymath}
since
\begin{displaymath}
a'b = b \quad\text{and}\quad a' = a'b + a'b'.
\end{displaymath}
Whatever form we give to the solution, the parameter~$u$ in it is
absolutely indeterminate, \emph{i.e.}, it can receive all possible
values, including~0 and~1; for when $u = 0$ we have
\begin{displaymath}
x = b,
\end{displaymath}
and when $u = 1$ we have
\begin{displaymath}
x = a',
\end{displaymath}
and these are the two extreme values of~$x$.
Now we understand that~$x$ is determinate in the particular
case in which $a' = b$, and that, on the other hand, it is
absolutely indeterminate when
\begin{displaymath}
b = 0, \quad a' = 1, \quad (\text{or } a = 0).
\end{displaymath}
Summing up, the formula
\begin{displaymath}
x = a'u + bu'
\end{displaymath}
replaces the ``limited'' variable~$x$ (lying between the limits~$a'$
and~$b$) by the ``unlimited'' variable~$u$ which can receive all
possible values, including~0 and~1.
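That the formula $x = a'u + bu'$ with an ``unlimited'' $u$ yields exactly the solutions of the equation can be confirmed by exhaustion; the following is an editorial sketch, not the author's demonstration.

```python
from itertools import product

def solutions(a, b):
    """All x in {0, 1} satisfying ax + bx' = 0."""
    return {x for x in (0, 1) if ((a & x) | (b & (1 - x))) == 0}

def swept(a, b):
    """All values of a'u + bu' as u ranges over {0, 1}."""
    return {((1 - a) & u) | (b & (1 - u)) for u in (0, 1)}

# When the resultant ab = 0 holds, the formula sweeps exactly the solutions.
for a, b in product((0, 1), repeat=2):
    if (a & b) == 0:
        assert solutions(a, b) == swept(a, b)
```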
\emph{Remark}.%
\footnote{\author{}{Poretsky.} \emph{Sept lois}, Chaps.~XXXIII and
XXXIV.}---The formula of solution
\begin{displaymath}
x = a'x + bx'
\end{displaymath}
is indeed equivalent to the given equation, but not so the
formula of solution
\begin{displaymath}
x = a'u + bu'
\end{displaymath}
as a function of the indeterminate~$u$. For if we develop the
latter we find
\begin{displaymath}
ab'x + a'bx' + ab(xu + x'u') + a'b'(xu' + x'u) = 0,
\end{displaymath}
and if we compare it with the developed equation
\begin{displaymath}
ab + ab'x + a'bx' = 0,
\end{displaymath}
we ascertain that the equation contains, besides the solution, the equality
\begin{displaymath}
ab(xu' + x'u) = 0,
\end{displaymath}
and lacks the following equality contained in the solution:
\begin{displaymath}
a'b'(xu' + x'u) = 0.
\end{displaymath}
Moreover these two terms disappear if we make
\begin{displaymath}
u = x
\end{displaymath}
and this reduces the formula to
\begin{displaymath}
x = a'x + bx'.
\end{displaymath}
From this remark, \author{}{Poretsky} concluded that, in general, the
solution of an equation is neither a consequence nor a cause
of the equation. It is a cause of it in the particular case in which
\begin{displaymath}
ab = 0,
\end{displaymath}
and it is a consequence of it in the particular case in which
\begin{displaymath}
(a'b' = 0) = (a + b = 1).
\end{displaymath}
But if $ab$ is not equal to~0, the equation is unsolvable and
the formula of solution absurd, which fact explains the
preceding paradox. If we have at the same time
\begin{displaymath}
ab = 0 \quad\text{and}\quad a + b = 1,
\end{displaymath}
the solution is both consequence and cause at the same time,
that is to say, it is equivalent to the equation. For when
$a' = b$ the equation is determinate and has only the one
solution
\begin{displaymath}
x = a' = b.
\end{displaymath}
Thus, whenever an equation is solvable, its solution is one of its
causes; and, in fact, the problem consists in finding a value of
$x$ which will verify it, \emph{i.e.}, which is a cause of it.
To sum up, we have the following equivalence:
\begin{displaymath}
(ax + bx' = 0) = (ab = 0) \sum_u (x = a'u + bu')
\end{displaymath}
which includes the following implications:
\begin{gather*}
(ax + bx' = 0) < (ab = 0), \\
(ax + bx' = 0) < \sum_u (x = a'u + bu'), \\
(ab = 0) \sum_u (x = a'u + bu') < (ax + bx' = 0).
\end{gather*}
\section{Elimination of Several Unknown Quantities}\label{ch:37}
We shall now consider an equation involving several unknown
quantities and suppose it reduced to the normal form, \emph{i.e.},
its first member developed with respect to the unknown quantities,
and its second member zero. Let us first concern ourselves with
the problem of elimination. We can eliminate the unknown
quantities either one by one or all at once.
For instance, let
\begin{equation}\label{eq:phi}
\begin{split}
\phi(x,y,z) &= axyz + bxyz' + cxy'z + dxy'z'\\
&+ fx'yz + gx'yz' + hx'y'z + kx'y'z' = 0\\
\end{split}
\end{equation}
be an equation involving three unknown quantities.
We can eliminate $z$ by considering it as the only unknown
quantity, and we obtain as resultant
\begin{displaymath}
(axy + cxy' + fx'y + hx'y') (bxy + dxy' + gx'y + kx'y') = 0
\end{displaymath}
or
\begin{equation}\label{eq:phi2}
abxy + cdxy' + fgx'y + hkx'y' = 0.
\end{equation}
If equation (\ref{eq:phi}) is possible, equation (\ref{eq:phi2}) is possible as well;
that is, it is verified by some values of~$x$ and~$y$. Accordingly
we can eliminate~$y$ from the equation by considering it as
the only unknown quantity, and we obtain as resultant
\begin{displaymath}
(abx + fgx') (cdx + hkx') = 0
\end{displaymath}
or
\begin{equation}\label{eq:phi3}
abcdx + fghkx' = 0.
\end{equation}
If equation~(\ref{eq:phi}) is possible, equation~(\ref{eq:phi3}) is also possible;
that is, it is verified by some values of~$x$. Hence we can
eliminate~$x$ from it and obtain as the final resultant,
\begin{displaymath}
abcd \cdot fghk = 0
\end{displaymath}
which is a consequence of~(\ref{eq:phi}), independent of the unknown
quantities. It is evident, by the principle of symmetry, that
the same resultant would be obtained if we were to eliminate
the unknown quantities in a different order. Moreover this
result might have been foreseen, for since we have (\S\ref{ch:28})
\begin{displaymath}
abcdfghk < \phi(x,y,z),
\end{displaymath}
$\phi(x,y,z)$ can vanish only if the product of its coefficients
is zero:
\begin{displaymath}
\left[\phi(x,y,z) = 0\right] < (abcdfghk = 0).
\end{displaymath}
Hence we can eliminate all the unknown quantities at once
by equating to 0 the product of the coefficients of the
function developed with respect to all these unknown quantities.
We can also eliminate some only of the unknown quantities at one
time. To do this, it is sufficient to develop the first member
with respect to these unknown quantities and to equate the product
of the coefficients of this development to~0. This product will
generally contain the other unknown quantities. Thus the resultant
of the elimination of~$z$ alone, as we have seen, is
\begin{displaymath}
abxy + cdxy' + fgx'y + hkx'y' = 0
\end{displaymath}
and the resultant of the elimination of~$y$ and~$z$ is
\begin{displaymath}
abcdx + fghkx' = 0.
\end{displaymath}
These partial resultants can be obtained by means of the
following practical rule: Form the constituents relating to the
unknown quantities to be retained; give each of them, for a
coefficient, the product of the coefficients of the constituents
of the general development of which it is a factor, and equate
the sum to~0.
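The rule for eliminating one unknown from the three-variable equation can be verified exhaustively: for each fixed $x, y$, the partial resultant vanishes exactly when some $z$ satisfies the equation. This check is an editorial sketch; the names are ours.

```python
from itertools import product

NOT = lambda t: 1 - t   # complement in the two-element algebra

def phi(co, x, y, z):
    """The developed first member with coefficients a, b, c, d, f, g, h, k."""
    a, b, c, d, f, g, h, k = co
    return ((a&x&y&z) | (b&x&y&NOT(z)) | (c&x&NOT(y)&z) | (d&x&NOT(y)&NOT(z))
            | (f&NOT(x)&y&z) | (g&NOT(x)&y&NOT(z))
            | (h&NOT(x)&NOT(y)&z) | (k&NOT(x)&NOT(y)&NOT(z)))

def resultant_z(co, x, y):
    """The rule: each retained constituent gets the product of its coefficients."""
    a, b, c, d, f, g, h, k = co
    return (a&b&x&y) | (c&d&x&NOT(y)) | (f&g&NOT(x)&y) | (h&k&NOT(x)&NOT(y))

# The partial resultant vanishes iff the equation is satisfiable in z.
for co in product((0, 1), repeat=8):
    for x, y in product((0, 1), repeat=2):
        assert (resultant_z(co, x, y) == 0) == \
               any(phi(co, x, y, z) == 0 for z in (0, 1))
```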
\section{Theorem Concerning the Values of a Function}\label{ch:38}
\emph{All the values which can be assumed by a function of any number
of variables $f(x, y, z \ldots)$ are given by the formula
\begin{displaymath}
abc \ldots k + u(a + b + c + \ldots + k),
\end{displaymath}
in which $u$ is absolutely indeterminate, and $a, b, c \ldots, k$ are
the coefficients of the development of~$f$.}
\emph{Demonstration}.---It is sufficient to prove that in the equality
\begin{displaymath}
f(x, y, z \ldots) = abc \ldots k + u(a + b + c + \ldots + k)
\end{displaymath}
$u$ can assume all possible values, that is to say, that this
equality, considered as an equation in terms of $u$, is indeterminate.
In the first place, for the sake of greater homogeneity, we
may put the second member in the form
\begin{displaymath}
u'abc \ldots k + u(a + b + c + \ldots + k),
\end{displaymath}
for
\begin{displaymath}
abc \ldots k = uabc \ldots k + u'abc \ldots k,
\end{displaymath}
and
\begin{displaymath}
uabc \ldots k < u(a + b + c + \ldots + k).
\end{displaymath}
Reducing the second member to~0 (assuming there are
only three variables $x, y, z$)
\begin{displaymath}
\begin{split}
(axyz &+ bxyz' + cxy'z + \ldots + kx'y'z')\\
&\times [ua'b'c' \ldots k' + u'(a' + b' + c' + \ldots + k')]\\
&+ (a'xyz + b'xyz' + c'xy'z + \ldots + k'x'y'z')\\
&\times [u(a + b + c + \ldots + k) + u'abc \ldots k] = 0,\\
\end{split}
\end{displaymath}
or more simply
\begin{displaymath}
\begin{split}
u(a &+ b + c + \ldots + k) (a'xyz + b'xyz' + c'xy'z + \ldots + k'x'y'z')\\
&+ u'(a' + b' + c' + \ldots + k') (axyz + bxyz' + \ldots + kx'y'z') = 0.\\
\end{split}
\end{displaymath}
If we eliminate all the variables $x, y, z$, but not the indeterminate
$u$, we get the resultant
\begin{displaymath}
\begin{split}
u(a &+ b + c + \ldots + k) a'b'c' \ldots k'\\
&+ u'(a' + b' + c'+\ldots + k')abc \ldots k = 0.\\
\end{split}
\end{displaymath}
Now the two coefficients of~$u$ and~$u'$ are identically zero; it
follows that~$u$ is absolutely indeterminate, which was to be
proved.\footnote{\author{Whitehead, A.~N.}{Whitehead,}
\emph{Universal Algebra}, Vol.~I, \S 33 (4).}
From this theorem follows the very important consequence
that a function of any number of variables can be changed
into a function of a single variable without diminishing or
altering its ``variability''.
\emph{Corollary}.---A function of any number of variables can
become equal to either of its limits.
For, if this function is expressed in the equivalent form
\begin{displaymath}
abc \ldots k + u(a + b + c + \ldots + k),
\end{displaymath}
it will be equal to its minimum $(abc \ldots k)$ when $u = 0$, and
to its maximum $(a + b + c + \ldots + k)$ when $u = 1$.
Moreover we can verify this proposition on the primitive
form of the function by giving suitable values to the
variables.
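For a function of two variables this verification is immediate by exhaustion, since the developed function takes exactly the values of its coefficients; the sketch below is our illustration.

```python
from itertools import product

def values(a, b, c, d):
    """All values of axy + bxy' + cx'y + dx'y' as x, y range over {0, 1}."""
    return {(a&x&y) | (b&x&(1-y)) | (c&(1-x)&y) | (d&(1-x)&(1-y))
            for x, y in product((0, 1), repeat=2)}

# The function reaches its inferior limit abcd and superior limit a+b+c+d.
for a, b, c, d in product((0, 1), repeat=4):
    vals = values(a, b, c, d)
    assert min(vals) == a & b & c & d
    assert max(vals) == a | b | c | d
```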
Thus a function can assume all values comprised between
its two limits, including the limits themselves. Consequently,
it is absolutely indeterminate when
\begin{displaymath}
a b c \ldots k = 0 \quad\text{and}\quad a + b + c + \ldots + k = 1
\end{displaymath}
at the same time, or
\begin{displaymath}
a b c \ldots k = 0 = a' b' c' \ldots k'.
\end{displaymath}
\section{Conditions of Impossibility and Indetermination}\label{ch:39}
The preceding theorem enables us to find the conditions
under which an equation of several unknown quantities is
impossible or indeterminate. Let $f(x, y, z \ldots)$ be the first
member supposed to be developed, and $a, b, c \ldots, k$ its
coefficients. The necessary and sufficient condition for the
equation to be possible is
\begin{displaymath}
abc \ldots k = 0.
\end{displaymath}
For, (1) if~$f$ vanishes for some value of the unknowns,
its inferior limit $abc \ldots k$ must be zero; (2) if $abc \ldots k$ is zero,
$f$~may become equal to it, and therefore may vanish for certain
values of the unknowns.
The necessary and sufficient condition%
\index{Condition!Necessary and sufficient} for the equation to
be indeterminate (identically verified)%
\index{Condition!of impossibility and indetermination} is
\begin{displaymath}
a + b + c + \ldots + k = 0.
\end{displaymath}
For, (1) if $a + b + c + \ldots + k$ is zero, since it is the
superior limit of $f$, this function will always and necessarily
be zero; (2) if $f$ is zero for all values of the unknowns,
$a + b + c + \ldots + k$ will be zero, for it is one of the values
of $f$.
Summing up, therefore, we have the two equivalences
\begin{gather*}
\sum [f(x, y, z, \ldots) = 0] = (a b c \ldots k = 0),\\
\prod [f(x, y, z, \ldots) = 0] = (a + b + c + \ldots + k = 0).
\end{gather*}
The equality $a b c \ldots k = 0$ is, as we know, the resultant
of the elimination of all the unknowns; it is the consequence
that can be derived from the equation (assumed to be verified)
independently of all the unknowns.
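Both conditions can be verified exhaustively for two unknowns; the check below is an editorial sketch, with `any`/`all` again standing for $\sum$ and $\prod$.

```python
from itertools import product

def f2(a, b, c, d, x, y):
    """The developed first member axy + bxy' + cx'y + dx'y'."""
    return (a&x&y) | (b&x&(1-y)) | (c&(1-x)&y) | (d&(1-x)&(1-y))

# The equation is possible iff abcd = 0, and identically verified
# iff a + b + c + d = 0.
pairs = list(product((0, 1), repeat=2))
for a, b, c, d in product((0, 1), repeat=4):
    assert any(f2(a, b, c, d, x, y) == 0
               for x, y in pairs) == ((a & b & c & d) == 0)
    assert all(f2(a, b, c, d, x, y) == 0
               for x, y in pairs) == ((a | b | c | d) == 0)
```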
\section{Solution of Equations Containing Several Unknown
Quantities}\label{ch:40}
On the other hand, let us see how
we can solve an equation with respect to its various unknowns,
and, to this end, we shall limit ourselves to the
case of two unknowns
\begin{displaymath}
axy + bxy' + cx'y + dx'y' = 0.
\end{displaymath}
First solving with respect to $x$,
\begin{displaymath}
x = (a'y + b'y')x + (cy + dy')x'.
\end{displaymath}
The resultant of the elimination of $x$ is
\begin{displaymath}
acy + bdy' = 0.
\end{displaymath}
If the given equation is true, this resultant is true.
Now it is an equation involving $y$ only; solving it,
\begin{displaymath}
y = (a' + c')y + bdy'.
\end{displaymath}
Had we eliminated~$y$ first and then~$x$, we would have
obtained the solution
\begin{displaymath}
y = (a'x + c'x')y + (bx + dx')y'
\end{displaymath}
and the equation in~$x$
\begin{displaymath}
abx + cdx' = 0,
\end{displaymath}
whence the solution
\begin{displaymath}
x = (a' + b')x + cdx'.
\end{displaymath}
We see that the solution of an equation involving two
unknown quantities is not symmetrical with respect to these
unknowns; according to the order in which they were eliminated,
we have the solution
\begin{align*}
x &= (a'y + b'y')x + (cy + dy')x',\\
y &= (a' + c')y + bdy',
\end{align*}
or the solution
\begin{align*}
x &= (a' + b')x + cdx',\\
y &= (a'x + c'x')y + (bx + dx')y'.
\end{align*}
If we replace the terms $x, y$, in the second members by
indeterminates $u, v$, one of the unknowns will depend on only
one indeterminate, while the other will depend on two. We
shall have a symmetrical solution by combining the two formulas,
\begin{align*}
x &= (a' + b')u + cdu',\\
y &= (a' + c')v + bdv',
\end{align*}
but the two indeterminates~$u$ and $v$~will no longer be independent
of each other. For if we bring these solutions into
the given equation, it becomes
\begin{displaymath}
abcd + ab'c'uv + a'bd'uv' + a'cd'u'v + b'c'du'v' = 0
\end{displaymath}
or since, by hypothesis, the resultant $abcd = 0$ is verified,
\begin{displaymath}
ab'c'uv + a'bd'uv' + a'cd'u'v + b'c'du'v' = 0.
\end{displaymath}
This is an ``equation of condition'' which the indeterminates
$u$~and $v$~must verify; it can always be verified, since its
resultant is identically true,
\begin{displaymath}
ab'c' \cdot a'bd' \cdot a'cd' \cdot b'c'd = aa' \cdot bb' \cdot cc' \cdot dd' = 0,
\end{displaymath}
but it is not verified by every pair of values attributed to~$u$
and~$v$.
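That the substitution really yields this equation of condition can be confirmed over all 64 two-valued assignments of $a, b, c, d, u, v$. The sketch below (an illustration only; the helper names are ours) compares the developed first member with the right-hand expression:

```python
from itertools import product

def non(t):
    # Boolean complement, written t' in the text
    return 1 - t

def first_member(a, b, c, d, x, y):
    return a & x & y | b & x & non(y) | c & non(x) & y | d & non(x) & non(y)

condition_ok = all(
    first_member(a, b, c, d,
                 (non(a) | non(b)) & u | c & d & non(u),   # x = (a'+b')u + cdu'
                 (non(a) | non(c)) & v | b & d & non(v))   # y = (a'+c')v + bdv'
    == (a & b & c & d
        | a & non(b) & non(c) & u & v
        | non(a) & b & non(d) & u & non(v)
        | non(a) & c & non(d) & non(u) & v
        | non(b) & non(c) & d & non(u) & non(v))
    for a, b, c, d, u, v in product((0, 1), repeat=6)
)
```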
Some general symmetrical solutions, \emph{i.e.}, symmetrical
solutions in which the unknowns are expressed in terms of several
independent indeterminates, can however be found.
This problem has been treated by \author{}{Schröder}%
\footnote{\emph{Algebra der Logik}, Vol.~I, \S 24.},
by \author{Whitehead, A.~N.}{Whitehead}%
\footnote{\emph{Universal Algebra}, Vol.~I, \S\S 35--37.}
and by \author{}{Johnson}.%
\footnote{``Sur la théorie des égalités logiques'', \emph{Bibl. du Cong. intern. de Phil.},
Vol. III, p.~185 (Paris, 1901).}
This investigation has only a purely technical interest; for,
from the practical point of view, we either wish to eliminate
one or more unknown quantities (or even all), or else we seek
to solve the equation with respect to one particular unknown.
In the first case, we develop the first member with respect
to the unknowns to be eliminated and equate the product of
its coefficients to~0. In the second case we develop with
respect to the unknown that is to be extricated and apply
the formula for the solution of the equation of one unknown
quantity. If it is desired to have the solution in terms of
some unknown quantities or in terms of the known terms only, the
other unknowns (or all the unknowns) must first be eliminated
before performing the solution.
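In two-valued terms both practical operations rest on the developed coefficients $f(1)$ and $f(0)$. The following helpers are a hypothetical sketch of the procedure for a single unknown (the names are ours):

```python
def coefficients(f):
    # development of f:  f(x) = f(1)x + f(0)x'
    return f(1), f(0)

def resultant(f):
    # eliminating x equates the product of the coefficients to 0
    a, b = coefficients(f)
    return a & b

def solve(f, u):
    # solution of ax + bx' = 0:  x = a'u + bu', with u an indeterminate
    a, b = coefficients(f)
    return (1 - a) & u | b & (1 - u)
```

Whenever the resultant vanishes, \texttt{solve} returns a root of $f(x) = 0$ for either value of the indeterminate $u$.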
\section{The Problem of Boole}\label{ch:41}\index{Boole!Problem of|(}
According to \author{}{Boole} the most general problem of the algebra
of logic is the following\footnote{\emph{Laws of Thought}, Chap.~IX,
\S 8.}:
Given any equation (which is assumed to be possible)
\begin{displaymath}
f(x, y, z,\ldots) = 0,
\end{displaymath}
and, on the other hand, the expression of a term~$t$ in terms
of the variables contained in the preceding equation
\begin{displaymath}
t = \varphi(x,y,z,\ldots)
\end{displaymath}
to determine the expression of $t$ in terms of the constants
contained in $f$ and in $\varphi$.
Suppose $f$ and $\varphi$ developed with respect to the variables
$x, y, z \ldots$ and let $p_1, p_2, p_3, \ldots$ be their constituents:
\begin{align*}
f(x, y, z, \ldots) &= A p_1 + B p_2 + C p_3 + \ldots,\\
\varphi(x, y, z, \ldots) &= ap_1 + bp_2 + cp_3 + \ldots.
\end{align*}
Then reduce the equation which expresses~$t$ so that its
second member will be~0:
\begin{displaymath}
\begin{split}
(t \varphi' + t' \varphi = 0) &= [(a' p_1 + b' p_2 + c' p_3 + \ldots) t\\
&\qquad + (a p_1 + b p_2 + c p_3 + \ldots) t' = 0].\\
\end{split}
\end{displaymath}
Combining the two equations into a single equation and
developing it with respect to~$t$:
\begin{multline*}
[(A + a') p_1 + (B + b') p_2 + (C+ c') p_3 + \ldots] t\\
+ [(A + a) p_1 + (B + b) p_2 + (C + c) p_3 + \ldots]t' = 0.
\end{multline*}
This is the equation which gives the desired expression
of $t$. Eliminating $t$, we obtain the resultant
\begin{displaymath}
A p_1 + B p_2 + C p_3 + \ldots = 0,
\end{displaymath}
as we might expect. If, on the other hand, we wish to eliminate
$x, y, z,\ldots$ (\emph{i.e.}, the constituents $p_1 , p_2 , p_3
\ldots$), we put the equation in the form
\begin{displaymath}
(A + a't + at')p_1 + (B + b't + bt')p_2 + (C + c't + ct') p_3 + \ldots = 0,
\end{displaymath}
and the resultant will be
\begin{displaymath}
(A + a't + at') (B + b't + bt')(C + c't + ct')\ldots = 0,
\end{displaymath}
an equation that contains only the unknown quantity~$t$ and
the constants of the problem (the coefficients of~$f$ and of~$\varphi$).
From this may be derived the expression of~$t$ in terms of
these constants. Developing the first member of this equation
\begin{displaymath}
(A + a') (B + b') (C + c') \ldots \times t + (A + a) (B + b) (C + c) \ldots \times t' = 0.
\end{displaymath}
The solution is
\begin{displaymath}
t = (A + a) (B + b) (C + c) \ldots + u(A'a + B'b + C'c + \ldots).
\end{displaymath}
The resultant is verified by hypothesis since it is
\begin{displaymath}
ABC \ldots = 0,
\end{displaymath}
which is the resultant of the given equation
\begin{displaymath}
f(x, y, z, \ldots) = 0.
\end{displaymath}
We can see how this equation contributes to restricting the
variability of $t$. Since $t$ was defined only by the function $\varphi$,
it was determined by the double inclusion
\begin{displaymath}
abc\ldots < t < a + b + c + \ldots.
\end{displaymath}
Now that we take into account the condition $f=0$, $t$~is
determined by the double inclusion
\begin{displaymath}
(A + a) (B + b) (C + c) \ldots < t < (A'a + B'b + C'c + \ldots).%
\footnote{\author{Whitehead, A.~N.}{Whitehead,} \emph{Universal Algebra}, p.~63.}
\end{displaymath}
The inferior limit can only have increased and the superior
limit diminished, for
\begin{displaymath}
abc \ldots < (A + a) (B + b) (C + c) \ldots
\end{displaymath}
and
\begin{displaymath}
A'a + B'b + C'c \ldots < a + b + c \ldots.
\end{displaymath}
The limits do not change if $A = B = C = \ldots = 0$, that
is, if the equation $f = 0$ is reduced to an identity, and this
was evident \emph{a priori}.\index{Boole!Problem of|)}
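Reading each coefficient $A, B, C, a, b, c$ and the indeterminate $u$ as a bit, the solution just obtained can be checked exhaustively. The sketch below (a verification aid of our own, not part of Boole's apparatus) confirms that it annuls the developed equation in $t$ whenever the resultant $ABC\ldots = 0$ is verified:

```python
from itertools import product

def non(t):
    return 1 - t

def boole_solution_ok():
    # three constituents, every coefficient read as a bit
    for A, B, C, a, b, c, u in product((0, 1), repeat=7):
        if A & B & C:                # the resultant ABC = 0 must be verified
            continue
        P = (A | non(a)) & (B | non(b)) & (C | non(c))   # coefficient of t
        Q = (A | a) & (B | b) & (C | c)                  # coefficient of t'
        t = Q | u & (non(A) & a | non(B) & b | non(C) & c)
        if P & t | Q & non(t):       # the developed equation must vanish
            return False
    return True
```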
\section{The Method of Poretsky}\label{ch:42}
The method of \author{}{Boole} and \author{}{Schröder} which we
have heretofore discussed is clearly inspired by the example of
ordinary algebra, and it is summed up in two processes analogous
to those of algebra, namely the solution of equations with
reference to unknown quantities and elimination of the unknowns.
Of these processes the second is much the more important from a
logical point of view, and \author{}{Boole} was even on the point
of considering deduction as essentially consisting in the
\emph{elimination of middle terms}. This notion, which is too
restricted, was suggested by the example of the syllogism, in
which the conclusion results from the elimination of the middle
term, and which for a long time was wrongly considered as the only
type of mediate deduction.\footnote{In fact, the fundamental
formula of elimination
\begin{displaymath}
(ax + bx' = 0) < (ab = 0)
\end{displaymath}
is, as we have seen, only another form and a consequence of the
principle of the syllogism
\begin{displaymath}
(b < x < a') < (b < a').
\end{displaymath}}
However this may be, \author{}{Boole} and \author{}{Schröder} have
exaggerated the analogy between the algebra of logic and ordinary
algebra. In logic, the distinction of known and unknown terms is
artificial and almost useless. All the terms are---in principle at
least---known, and it is simply a question, certain relations
between them being given, of deducing new relations (unknown or
not explicitly known) from these known relations. This is the
purpose of \author{}{Poretsky's} method which we shall now
expound. It may be summed up in three
laws, the \emph{law of forms}, the \emph{law of consequences}%
\index{Consequences!Law of}\index{Law of Consequences} and the
\emph{law of causes}.
\section{The Law of Forms}\label{ch:43}
This law answers the following
problem: An equality being given, to find for any term
(simple or complex) a determination equivalent to this equality.
In other words, the question is to find all the \emph{forms}
equivalent to this equality, any term at all being given as
its first member.
We know that any equality can be reduced to a form in which the
second member is~0 or~1; \emph{i.e.}, to one of the two equivalent
forms
\begin{displaymath}
N = 0, \qquad N' = 1.
\end{displaymath}
The function~$N$ is what \author{}{Poretsky} calls the \emph{logical zero}
of the given equality; $N'$~is its logical \emph{whole.}%
\footnote{They are called ``logical'' to distinguish them from the
identical \emph{zero} and \emph{whole}, \emph{i.e.}, to indicate
that these two terms are not equal to 0 and 1 respectively except
by virtue of the data of the problem.}
Let~$U$ be any term; then the determination of~$U$:
\begin{displaymath}
U = N'U + NU'
\end{displaymath}
is equivalent to the proposed equality; for we know it is
equivalent to the equality
\begin{displaymath}
(NU + NU' = 0) = (N = 0).
\end{displaymath}
Let us recall the signification of the determination
\begin{displaymath}
U = N'U + NU'.
\end{displaymath}
It denotes that the term~$U$ is contained in~$N'$ and contains
$N$. This is easily understood, since, by hypothesis,
$N$~is equal to~0 and $N'$ to~1. Therefore we can formulate
the \emph{law of forms} in the following way:
\emph{To obtain all the forms equivalent to a given equality, it
is sufficient to express that any term contains the logical zero
of this equality and is contained in its logical whole.}
The number of forms of a given equality is unlimited; for
any term gives rise to a form, and to a form different from
the others, since it has a different first member. But if we
are limited to the universe of discourse determined by~$n$
simple terms, the number of forms becomes finite and determinate.
For, in this limited universe, there are $2^n$ constituents.
Now, all the terms in this universe that can be
conceived and defined are sums of some of these constituents.
Their number is, therefore, equal to the number
of combinations that can be made with $2^n$ constituents,
namely $2^{2^n}$ (including~0, the combination of no constituents,
and~1, the combination of all the constituents). This will
also be the number of different forms of any equality in the
universe in question.
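For $n = 2$ both the count $2^{2^n} = 16$ and the law of forms itself can be verified by encoding every class as a 4-bit mask over the constituents (the encoding is ours). The determination then holds for every term $U$ exactly when $N = 0$:

```python
# a class in the universe of n = 2 simple terms is a subset of the
# four constituents ab, ab', a'b, a'b', encoded as a 4-bit mask
FULL = 0b1111
CLASSES = range(2 ** (2 ** 2))       # 2^(2^n) = 16 classes

def form_holds(N, U):
    # the determination U = N'U + NU'
    return U == ((FULL ^ N) & U | N & (FULL ^ U))

forms_ok = all(form_holds(N, U) == (N == 0)
               for N in CLASSES for U in CLASSES)
```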
\section{The Law of Consequences}\label{ch:44}
We shall now pass to the law of consequences.%
\index{Consequences!Law of}\index{Law of Consequences} Generalizing
the conception of \author{}{Boole,} who made deduction%
\index{Deduction} consist in the elimination of middle terms,
\author{}{Poretsky} makes it consist in the elimination of known terms
(\emph{connaissances}\index{connaissances@\emph{Connaissances}}). This
conception is explained and justified as follows.
All problems in which the data are expressed by logical
equalities or inclusions can be reduced to a single logical
equality by means of the formula%
\footnote{We employ capitals to denote complex terms (logical functions) in
contrast to simple terms denoted by small letters ($a, b, c, \ldots$).}
\begin{displaymath}
(A = 0)(B = 0) (C = 0) \ldots = (A + B + C \ldots = 0).
\end{displaymath}
In this logical equality, which sums up all the data of the
problem, we develop the first member with respect to all
the simple terms which appear in it (and not with respect
to the unknown quantities). Let $n$ be the number of simple
terms; then the number of the constituents of the development
of~1 is $2^n$. Let $m$ ($\leq 2^n$) be the number of those
constituents appearing in the first member of the equality.
All possible consequences of this equality (in the universe
of the $n$ terms in question) may be obtained by forming all
the additive combinations of these $m$~constituents, and equating
them to~0; and this is done in virtue of the formula
\begin{displaymath}
(A + B = 0) < (A = 0).
\end{displaymath}
We see that we pass from the equality to any one of its
consequences by suppressing some of the constituents in its first
member, which correspond to as many elementary equalities
(having~0 for second member), \emph{i.e.}, as many as there are
data in the problem. This is what is meant by ``eliminating the
known terms''.
The number of consequences that can be derived from an equality
(in the universe of~$n$ terms with respect to which it is
developed) is equal to the number of additive combinations that
may be formed with its $m$ constituents; \emph{i.e.}, $2^m$. This
number includes the combination of 0~constituents, which gives
rise to the identity $0 = 0$, and the combination of the
$m$~constituents, which reproduces the given equality.
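The enumeration itself is mechanical: form every additive combination of the $m$ constituents of the first member. A sketch (the function name is ours), applied to a first member of four constituents:

```python
from itertools import combinations

def consequences(constituents):
    # every additive combination of the constituents, each combination
    # to be equated to 0; 2^m of them for m constituents
    cs = list(constituents)
    return [frozenset(combo)
            for r in range(len(cs) + 1)
            for combo in combinations(cs, r)]

# the developed premise of the syllogism has the four constituents
syllogism = consequences(["abc'", "ab'c", "ab'c'", "a'bc'"])
```

The list of $2^4 = 16$ combinations includes the empty one (the identity $0 = 0$) and the complete one (the given equality itself).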
Let us apply this method to the equation with one unknown
quantity
\begin{displaymath}
ax + bx' = 0.
\end{displaymath}
Developing it with respect to the \emph{three} terms $a, b, x$:
\begin{align*}
(abx &+ ab'x + abx' + a'bx' = 0) \\
&= [ab(x + x') + ab'x + a'bx' = 0] \\
&= (ab = 0) (ab'x = 0) (a'bx'=0).
\end{align*}
Thus we find, on the one hand, the resultant $ab = 0$,
and, on the other hand, two equalities which may be transformed
into the inclusions
\begin{displaymath}
x < a' + b, \qquad a'b < x.
\end{displaymath}
But by the resultant which is equivalent to $b < a'$, we have
\begin{displaymath}
a' + b = a', \qquad a'b = b.
\end{displaymath}
This consequence may therefore be reduced to the double
inclusion
\begin{displaymath}
x < a', \qquad b < x,
\end{displaymath}
that is, to the known solution.
Let us apply the same method to the premises of the
syllogism
\begin{displaymath}
(a < b) (b < c).
\end{displaymath}
Reduce them to a single equality
\begin{displaymath}
(a < b) = (ab' = 0), \quad (b < c) = (bc' = 0), \quad (ab' + bc' = 0),
\end{displaymath}
and seek all of its consequences.
Developing with respect to the three terms $a, b, c$:
\begin{displaymath}
abc' + ab'c + ab'c' + a'bc' = 0.
\end{displaymath}
The consequences\index{Consequences!Sixteen} of this equality, which
contains four constituents, are~16 ($2^4$) in number as follows:
\begin{gather*}
\tag*{1.} (abc' = 0) = (ab < c);\\
\tag*{2.} (ab'c = 0) = (ac < b);\\
\tag*{3.} (ab'c'= 0) = (a < b + c);\\
\tag*{4.} (a'bc' = 0) = (b < a + c);\\
\tag*{5.} (abc' + ab'c = 0) = (a < bc + b'c');\\
\tag*{6.} (abc' + ab'c'= 0) = (ac' = 0) = (a < c).
\end{gather*}
This is the traditional conclusion of the syllogism.%
\footnote{It will be observed that this is the only consequence (except the
two extreme consequences [see the text below]) independent of~$b$; therefore
it is the resultant of the elimination of that middle term.}
\begin{displaymath}
\tag*{7.} (abc' + a'bc' = 0) = (bc' = 0) = (b < c).
\end{displaymath}
This is the second premise.
\begin{displaymath}
\tag*{8.} (ab'c + ab'c' = 0) = (ab' = 0) = (a < b).
\end{displaymath}
This is the first premise.
\begin{gather*}
\tag*{9.} (ab'c + a'bc' = 0) = (ac < b < a + c);\\
\tag*{10.} (ab'c'+ a'bc' = 0) = (ab' + a'b < c);\\
\tag*{11.} (abc' + ab'c + ab'c' = 0) = (ab' + ac' = 0) = (a < bc);\\
\tag*{12.} (abc' + ab'c + a'bc' = 0) = (ab'c + bc' = 0) = (ac < b < c);\\
\tag*{13.} (abc' + ab'c' + a'bc' = 0) = (ac' + bc' = 0) = (a + b < c);\\
\tag*{14.} (ab'c + ab'c' + a'bc' = 0) = (ab' + a'bc' = 0) = (a < b < a + c).
\end{gather*}
The last two consequences (15 and 16) are those obtained by combining
no constituents and by combining all; the first is the identity
\begin{displaymath}
\tag*{15.} 0 = 0,
\end{displaymath}
which confirms the paradoxical proposition that the true
(identity) is implied by any proposition (is a consequence
of it); the second is the given equality itself
\begin{displaymath}
\tag*{16.} ab' + bc' = 0,
\end{displaymath}
which is, in fact, its own consequence by virtue of the
principle of identity. These two consequences may be called
the ``extreme consequences'' of the proposed equality. If
we wish to exclude them, we must say that the number of
the consequences properly so called of an equality of~$m$
constituents is $2^m - 2$.
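Each of these sixteen consequences can be confirmed pointwise: at every two-valued assignment verifying the premise $ab' + bc' = 0$, each constituent of the first member vanishes, and with it the conclusion $a < c$. A sketch of that check (helper names ours):

```python
from itertools import product

def non(t):
    return 1 - t

def premise(a, b, c):
    # first member of  ab' + bc' = 0
    return a & non(b) | b & non(c)

def constituents(a, b, c):
    # the four constituents of the developed first member
    return (a & b & non(c), a & non(b) & c,
            a & non(b) & non(c), non(a) & b & non(c))

syllogism_ok = all(
    all(k == 0 for k in constituents(a, b, c)) and a & non(c) == 0
    for a, b, c in product((0, 1), repeat=3)
    if premise(a, b, c) == 0
)
```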
\section{The Law of Causes}\label{ch:45}\index{Causes!Law of|(}
The method of finding the consequences of a given equality
suggests directly the method of finding its \emph{causes}, namely,
the propositions of which it is the consequence. Since we pass
from the cause to the consequence by eliminating known terms,
\emph{i.e.}, by suppressing constituents, we will pass conversely
from the consequence to the cause by adjoining known terms,
\emph{i.e.}, by adding constituents to the given equality. Now,
the number of constituents that may be added to it, \emph{i.e.},
that do not already appear in it, is $2^n-m$. We will obtain all
the possible causes (in the universe of the $n$ terms under
consideration) by forming all the additive combinations of these
constituents, and adding them to the first member of the equality
in virtue of the general formula
\begin{displaymath}
(A + B = 0) < (A = 0),
\end{displaymath}
which means that the equality $(A = 0)$ has as its cause the
equality $(A + B = 0)$, in which $B$ is any term. The number
of causes thus obtained will be equal to the number of the
aforesaid combinations, or $2^{2^n - m}$.
This method may be applied to the investigation of the
causes of the premises of the syllogism
\begin{displaymath}
(a < b) (b < c)
\end{displaymath}
which, as we have seen, is equivalent to the developed
equality
\begin{displaymath}
abc' + ab'c + ab'c' + a'bc' = 0.
\end{displaymath}
This equality contains four of the eight $(2^3)$ constituents
of the universe of three terms, the four others being
\begin{displaymath}
abc, a'bc, a'b'c, a'b'c'.
\end{displaymath}
The number of their combinations is 16 $(2^4)$; this is also
the number of the causes sought, which are:
\begin{gather}
\begin{split}
(abc &+ abc' + ab'c + ab'c' + a'bc' = 0)\\
&= (a + bc' = 0) = (a = 0) (b < c);
\end{split}\\
\begin{split}
(abc' &+ ab'c + ab'c' + a'bc + a'bc' = 0)\\
&= (abc'+ ab' + a'b = 0) = (ab < c) (a = b);
\end{split}\\
\begin{split}
(abc' &+ ab'c + ab'c' + a'bc' + a'b'c = 0)\\
&= (bc' + b'c + ab'c' = 0) = (b = c) (a < b + c);
\end{split}\\
\begin{split}
(abc' &+ ab'c + ab'c' + a'bc' + a'b'c' = 0)\\
&= (c' + ab' = 0) = (c=1) (a < b);
\end{split}\\
\begin{split}
(abc &+ abc' + ab'c + ab'c' + a'bc + a'bc' = 0)\\
&= (a + b = 0) = (a = 0) (b = 0);
\end{split}\\
\begin{split}
(abc &+ abc' + ab'c + ab'c' + a'bc' + a'b'c = 0)\\
&= (a + bc' + b'c = 0) = (a = 0) (b = c);
\end{split}\\
\begin{split}
(abc &+ abc' + ab'c + ab'c' + a'bc' + a'b'c' = 0)\\
&= (a + c' = 0) = (a = 0) (c = 1)\footnote{
It will be observed that this cause is the only one which is independent
of $b$; and indeed, in this case, whatever $b$ is, it will always
contain $a$ and will always be contained in $c$. Compare Cause 5, which
is independent of $c$, and Cause 10, which is independent of $a$.};\\
\end{split}\\
\begin{split}
(abc' &+ ab'c + ab'c' + a'bc + a'bc' + a'b'c = 0)\\
&= (ac' + a'c + ab'c + a'bc' = 0)\\
&= (a = c)(ac < b < a + c) = (a = b = c);
\end{split}\\
\begin{split}
(abc' &+ ab'c + ab'c' + a'bc + a'bc' + a'b'c' = 0)\\
&= (c' + ab' + a'b = 0) = (c = 1)(a = b);
\end{split}\\
\begin{split}
(abc' &+ ab'c + ab'c' + a'bc' +a'b'c + a'b'c' = 0)\\
&= (b' + c' = 0) = (b = c = 1).
\end{split}
\end{gather}
Before going any further, it may be observed that when
the sum of certain constituents is equal to~0, the sum of
the rest is equal to~1. Consequently, instead of examining
the sum of seven constituents obtained by ignoring one of
the four missing constituents, we can examine the equalities
obtained by equating each of these constituents to~1:
\begin{alignat}{2}
(a'b'c' = 1) &= (a + b + c = 0) &&= (a = b = c = 0);\\
(a'b'c = 1) &= (a + b + c' = 0) &&= (a = b = 0) (c = 1);\\
(a'bc = 1) &= (a + b' + c' = 0) &&= (a = 0) (b = c = 1);\\
(abc = 1) & &&= (a = b = c = 1).
\end{alignat}
Note that the last four causes are based on the inclusion
\begin{displaymath}
0 < 1.
\end{displaymath}
The last two causes (\ref{eq:absurdity} and \ref{eq:equality}) are obtained either by
adding \emph{all} the missing constituents or by not adding any.
In the first case, the sum of all the constituents being equal
to~1, we find
\begin{equation}\label{eq:absurdity}
1 = 0,
\end{equation}
that is, absurdity, and this confirms the paradoxical proposition
that the false (the absurd) implies any proposition
(is its cause). In the second case, we obtain simply the
given equality, which thus appears as one of its own causes
(by the principle of identity):
\begin{equation}\label{eq:equality}
ab' + bc' = 0.
\end{equation}
If we disregard these two extreme causes, the number of
causes properly so called will be
\begin{displaymath}
2^{2^n - m} - 2.
\end{displaymath}
\index{Causes!Law of|)}
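That each cause implies the given equality can again be confirmed over two-valued assignments. The sketch below (names ours) tests three of the causes found above, read as conditions on bits:

```python
from itertools import product

def non(t):
    return 1 - t

def equality(a, b, c):
    # the given equality  ab' + bc' = 0
    return (a & non(b) | b & non(c)) == 0

# three of the causes above, read as two-valued conditions
def cause_1(a, b, c): return a == 0 and b & non(c) == 0    # (a = 0)(b < c)
def cause_4(a, b, c): return c == 1 and a & non(b) == 0    # (c = 1)(a < b)
def cause_7(a, b, c): return a == 0 and c == 1             # (a = 0)(c = 1)

causes_ok = all(
    (not cause(a, b, c)) or equality(a, b, c)
    for cause in (cause_1, cause_4, cause_7)
    for a, b, c in product((0, 1), repeat=3)
)
```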
\section{Forms of Consequences and Causes}\label{ch:46}
\index{Causes!Forms of}\index{Forms of Causes}
\index{Consequences!Forms of}\index{Forms of Consequences}
We can
apply the law of forms to the consequences and causes of a
given equality so as to obtain all the forms possible to each
of them. Since any equality is equivalent to one of the two forms
\begin{displaymath}
N=0, \qquad N'=1,
\end{displaymath}
each of its consequences has the form%
\footnote{In \S\ref{ch:44} we said that a consequence is obtained by taking a part
of the constituents of the first member $N$, and not by multiplying it by
a term~$X$; but it is easily seen that this amounts to the same thing.
For, suppose $X$ (like $N$) to be developed with respect to the $n$ terms
of discourse. It will be composed of a certain number of constituents.
To perform the multiplication of $N$ by $X$, it is sufficient to multiply
all their constituents each by each. Now, the product of two identical
constituents is equal to each of them, and the product of two different
constituents is 0. Hence the product of $N$ by $X$ becomes reduced to
the sum of the constituents common to $N$ and $X$, which is, of course,
contained in $N$. So, to multiply $N$ by an arbitrary term is tantamount
to taking a part of its constituents (or all, or none).}
\begin{displaymath}
NX = 0, \qquad \text{or } N' + X' = 1,
\end{displaymath}
and each of its causes has the form
\begin{displaymath}
N + X = 0, \qquad \text{or } N' X' = 1.
\end{displaymath}
In fact, we have the following formal implications:
\begin{gather*}
(N + X = 0) < (N = 0) < (N X = 0),\\
(N' X' = 1) < (N' = 1) < (N' + X' = 1).
\end{gather*}
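These four implications can be confirmed mechanically, with classes encoded once more as 4-bit masks, so that \texttt{FULL} plays the part of 1 (the encoding is ours):

```python
FULL = 0b1111      # the whole, 1, as a 4-bit mask

def implies(p, q):
    return (not p) or q

implications_ok = all(
    implies((N | X) == 0, N == 0)
    and implies(N == 0, (N & X) == 0)
    and implies(((FULL ^ N) & (FULL ^ X)) == FULL, (FULL ^ N) == FULL)
    and implies((FULL ^ N) == FULL, ((FULL ^ N) | (FULL ^ X)) == FULL)
    for N in range(16) for X in range(16)
)
```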
Applying the law of forms, the formula of the consequences
becomes
\begin{displaymath}
U = (N' + X') U + N X U',
\end{displaymath}
and the formula of the causes
\begin{displaymath}
U = N' X' U + (N + X) U';
\end{displaymath}
or, more generally, since~$X$ and~$X'$ are indeterminate terms,
and consequently are not necessarily the negatives of each
other, the formula of the consequences will be
\begin{displaymath}
U = (N' + X) U + N Y U',
\end{displaymath}
and the formula of the causes
\begin{displaymath}
U = N' X U + (N + Y) U'.
\end{displaymath}
The first denotes that $U$ is contained in $(N' + X)$ and
contains $N Y$; which indeed results, \emph{a fortiori}, from the hypothesis
that $U$ is contained in $N'$ and contains $N$.
The second formula denotes that $U$ is contained in $N' X$
and contains $N + Y$, whence it results, \emph{a fortiori}, that $U$ is
contained in $N'$ and contains $N$.
We can express this rule verbally if we agree to call
every class contained in another a \emph{sub-class}, and every
class that contains another a \emph{super-class}. We then say:
To obtain all the consequences of an equality (put in the
form $U = N' U + N U'$), it is sufficient to substitute for its
logical whole $N'$ all its super-classes, and, for its logical
zero~$N$, all its sub-classes. Conversely, to obtain all the
causes of the same equality, it is sufficient to substitute for
its logical whole all its sub-classes, and for its logical zero,
all its super-classes.
\section{Example: Venn's Problem}\label{ch:47}
\index{Council!Members of}
\emph{The members of the administrative council of a financial society
are either bondholders or shareholders, but not both. Now, all the
bondholders
form a part of the council. What conclusion must we draw?}
Let~$a$ be the class of the members of the council; let~$b$
be the class of the bondholders and~$c$ that of the shareholders.
The data of the problem may be expressed as
follows:
\begin{displaymath}
a < bc' + b'c, \qquad b < a.
\end{displaymath}
Reducing to a single developed equality,
\begin{gather}
\notag a(b c + b' c') = 0, \qquad a' b = 0,\\
\label{eq:47.1} a b c + a b' c' + a' b c + a' b c' = 0.
\end{gather}
This equality, which contains 4 of the constituents, is
equivalent to the following, which contains the four others,
\begin{equation}\label{eq:47.2}
a b c' + a b'c + a' b' c + a' b' c' = 1.
\end{equation}
This equality may be expressed in as many different forms
as there are classes in the universe of the three terms
$a, b, c$.
\begin{gather*}
\tag*{Ex. 1.} a = a b c' + a b' c + a' b c + a' b c',\\
\intertext{that is,}
b < a < b c' + b' c, \\
\tag*{Ex. 2.} b = a b c' + a b' c' = a c';\\
\tag*{Ex. 3.} c = a b' c + a' b' c + a b' c' + a' b c'\\
\intertext{that is,}
a b' + a' b < c < b'.
\end{gather*}
These are the solutions obtained by solving equation (\ref{eq:47.1})
with respect to~$a$, $b$, and~$c$.
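The determination of Ex.~2 can also be confirmed pointwise: at every two-valued assignment at which the data hold, the bondholders coincide with the council members who are not shareholders, $b = ac'$. A sketch (names ours):

```python
from itertools import product

def non(t):
    return 1 - t

def data(a, b, c):
    # the developed first member  abc + ab'c' + a'bc + a'bc'
    return (a & b & c | a & non(b) & non(c)
            | non(a) & b & c | non(a) & b & non(c))

venn_ok = all(b == (a & non(c))
              for a, b, c in product((0, 1), repeat=3)
              if data(a, b, c) == 0)
```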
From equality (\ref{eq:47.1}) we can derive 16 consequences%
\index{Consequences!Sixteen} as follows:
\begin{align*}
\tag*{1.} a b c &= 0;\\
\tag*{2.} (a b' c'= 0) &= (a < b + c);\\
\tag*{3.} (a' b c = 0) &= (b c < a);\\
\tag*{4.} (a' b c' = 0) &= (b < a + c);\\
\tag*{5.} (a b c + a b' c' = 0) &= (a < b c' + b' c) \text{ [1$^{\text{st}}$ premise]};\\
\tag*{6.} (a b c + a' b c = 0) &= (b c = 0);\\
\tag*{7.} (a b c + a' b c' = 0) &= (b < a c' + a' c);\\
\tag*{8.} (a b' c' + a' b c = 0) &= (b c < a < b + c);\\
\tag*{9.} (a b' c' + a' b c' = 0) &= (a b' + a' b < c);\\
\tag*{10.} (a' b c + a' b c' = 0) &= (a' b = 0) \text{ [2$^{\text{d}}$ premise]};\\
\tag*{11.} (a b c + a b' c' + a' b c = 0) &= (b c + a b' c' = 0);\\
\tag*{12.} a b c + a b' c' + a' b c' &= 0;\\
\tag*{13.} (a b c + a' b c + a' b c' = 0) &= (b c + a' b c' = 0);\\
\tag*{14.} a b' c' + a' b c + a' b c' &= 0.
\end{align*}
The last two consequences, as we know, are the identity
$(0 = 0)$ and the equality (\ref{eq:47.1}) itself. Among the preceding
consequences will be especially noted the 6\th{} ($b c = 0$), the
resultant of the elimination of $a$, and the 10\th{} ($a' b = 0$),
the resultant of the elimination of $c$. When $b$ is eliminated
the resultant is the identity
\begin{displaymath}
[(a' + c) a c' = 0] = (0=0).
\end{displaymath}
Finally, we can deduce from the equality (\ref{eq:47.1}) or its equivalent
(\ref{eq:47.2}) the following 16 causes:\index{Causes!Sixteen}
\begin{align*}
\tag*{1.} (a b c' = 1) &= (a=1)(b=1)(c=0);\\
\tag*{2.} (a b' c = 1) &= (a=1)(b=0)(c=1);\\
\tag*{3.} (a' b' c = 1) &= (a=0)(b=0)(c=1);\\
\tag*{4.} (a' b' c' = 1) &= (a=0)(b=0)(c=0);\\
\tag*{5.} (a b c' + a b' c = 1) &= (a=1)(b'=c);\\
\tag*{6.} (a b c' + a' b' c = 1) &= (a=b=c');\\
\tag*{7.} (a b c' + a' b' c' = 1) &= (c=0)(a=b);\\
\tag*{8.} (a b' c + a' b' c = 1) &= (b=0)(c=1);\\
\tag*{9.} (a b' c + a' b' c' = 1) &= (b=0)(a=c);\\
\tag*{10.} (a' b' c + a' b' c' = 1) &= (a=0)(b=0);\\
\tag*{11.} (a b c' + a b' c + a' b' c = 1) &= (b = c')(c' < a);\\
\tag*{12.} (a b c' + a b' c + a' b' c' = 1) &= (b c = 0)(a = b + c);\\
\tag*{13.} (a b c' + a' b' c + a' b' c' = 1) &= (a c = 0)(a = b);\\
\tag*{14.} (a b' c + a' b' c + a' b' c' = 1) &= (b = 0)(a < c).
\end{align*}
The last two causes, as we know, are the equality (\ref{eq:47.1})
itself and the absurdity ($1 = 0$). It is evident that the
cause independent of $a$ is the 8\th{} $(b = 0)(c = 1)$, and the
cause independent of $c$ is the 10\th{} $(a = 0)(b = 0)$. There
is no cause, properly speaking, independent of $b$. The most
``natural'' cause, the one which may be at once divined
simply by the exercise of common sense, is the 12\th{}:
\begin{displaymath}
(b c = 0)(a = b + c).
\end{displaymath}
But other causes are just as possible; for instance the 9\th{}
$(b = 0) (a = c)$, the 7\th{} $(c = 0) (a = b)$, or the 13\th{}
$(a c = 0) (a = b)$.
We see that this method furnishes the complete enumeration
of all possible cases. In particular, it comprises, among
the \emph{forms} of an equality, the solutions deducible therefrom
with respect to such and such an ``unknown quantity'', and,
among the \emph{consequences} of an equality, the resultants of the
elimination of such and such a term.
\section{The Geometrical Diagrams of Venn}\label{ch:48}
\author{}{Poretsky's}
method may be looked upon as the perfection of the methods of
\author{}{Stanley Jevons} and \author{}{Venn.}
Conversely, it finds in them a geometrical and mechanical
illustration, for \author{}{Venn's} method is translated into
geometrical diagrams which represent all the constituents, so
that, in order to obtain the result, we need only strike out (by
shading) those which are made to vanish by the data of the
problem. For instance, the universe of three terms $a, b, c$,
represented by the unbounded plane, is divided by three simple
closed contours into eight regions which represent the eight
constituents (Fig.~\ref{fig:1}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{venn.1}
\caption{}
\label{fig:1}
\end{figure}
To represent geometrically the data of \author{}{Venn's} problem
we must strike out the regions $a b c$, $a b' c'$, $a' b c$ and
$a' b c'$; there will then remain the regions $a b c'$, $a b' c$,
$a' b' c$, and $a' b' c'$ which will constitute the universe
\emph{relative to the problem}, being what \author{}{Poretsky}
calls his \emph{logical whole} (Fig.~\ref{fig:2}). Then every
class will be contained in this universe, which will give for each
class the expression resulting from the data of the problem. Thus,
simply by inspecting the diagram, we see that the region $b c$
does not exist (being struck out); that the region $b$ is reduced
to $a b c'$ (hence to $a c'$); that all $a$ is $b$ or $c$, and so
on.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{venn.2}\\
\caption{}
\label{fig:2}
\end{figure}
This diagrammatic method has, however, serious inconveniences as a
method for solving logical problems. It does not show how the data
are exhibited by canceling certain constituents, nor does it show
how to combine the remaining constituents so as to obtain the
consequences sought. In short, it serves only to exhibit one
single step in the argument, namely the equation of the problem;
it dispenses neither with the previous steps, \emph{i.e.},
``throwing of the problem into an equation'' and the
transformation of the premises, nor with the subsequent steps,
\emph{i.e.}, the combinations that lead to the various
consequences. Hence it is of very little use, inasmuch as the
constituents can be represented by algebraic symbols quite as well
as by plane regions, and are much easier to deal with in this
form.
\section{The Logical Machine of Jevons}\label{ch:49}
In order to make his diagrams more tractable, \author{}{Venn}
proposed a mechanical device by which the plane regions to be
struck out could be lowered and caused to disappear. But
\author{}{Jevons} invented a more complete mechanism, a sort of
\emph{logical piano}. The keyboard of this instrument was composed
of keys indicating the various simple terms $(a, b, c, d)$, their
negatives, and the signs $+$ and $=$. Another part of the
instrument consisted of a panel with movable tablets on which were
written all the combinations of simple terms and their negatives;
that is, all the constituents of the universe of discourse.
Instead of writing out the equalities which represent the
premises, they are ``played'' on a keyboard like that of a
typewriter. The result is that the constituents which vanish
because of the premises disappear from the panel. When all the
premises have been ``played'', the panel shows only those
constituents whose sum is equal to~1, that is, forms the universe
with respect to the problem, its logical whole. This mechanical
method has the advantage over \author{}{Venn's} geometrical method
of performing automatically the ``throwing into an equation'',
although the premises must first be expressed in the form of
equalities; but it throws no more light than the geometrical
method on the operations to be performed in order to draw the
consequences from the data displayed on the panel.
\section{Table of Consequences}
\label{ch:50}\index{Consequences!Table of|(} But
\author{}{Poretsky's} method can be illustrated, better than by
geometrical and mechanical devices, by the construction of a table
which will exhibit directly all the consequences and all the
causes of a given equality. (This table is relative to this
equality and each equality requires a different table). Each table
comprises the $2^n$ classes that can be defined and distinguished
in the universe of discourse of $n$ terms. We know that an
equality consists in the annulment of a certain number of these
classes, viz., of those which have for constituents some of the
constituents of its \emph{logical zero} $N$. Let $m$ be the number
of these latter constituents, then the number of the subclasses of
$N$ is $2^m$ which, therefore, is the number of classes of the
universe which vanish in consequence of the equality considered.
Arrange them in a column commencing with 0 and ending with $N$
(the two extremes). On the other hand, given any class at all, any
preceding class may be added to it without altering its value,
since by hypothesis they are null (in the problem under
consideration). Consequently, by the data of the problem, each
class is equal to $2^m$ classes (including itself). Thus, the
assemblage of the $2^n$ classes of discourse is divided into
$2^{n-m}$ series of $2^m$ classes, each series being constituted
by the sums of a certain class and of the $2^m$ classes of the
first column (sub-classes of $N$). Hence we can arrange these
$2^m$ sums in the following columns by making them correspond
horizontally to the classes of the first column which gave rise to
them. Let us take, for instance, the very simple equality $a = b$,
which is equivalent to
\begin{displaymath}
ab' + a'b = 0.
\end{displaymath}
The logical zero ($N$) in this case is $a b' + a' b$. It comprises
two constituents and consequently four sub-classes: $0$, $a b'$,
$a' b$, and $a b' + a' b$. These will compose the first column.
The other classes of discourse are $a b$, $a' b'$, $a b + a' b'$,
and those obtained by adding to each of them the four classes of
the first column. In this way, the following table is obtained:
\begin{displaymath}
\begin{matrix}
0 & a b & a' b' & a b + a' b' \\
a b' & a & b' & a + b' \\
a' b & b & a' & a' + b \\
a b' + a'b & a + b & a' + b' & 1
\end{matrix}
\end{displaymath}
By construction, each class of this table is the sum of
those at the head of its row and of its column, and, by the
data of the problem, it is equal to each of those in the
same column. Thus we have 64 different consequences for
any equality in the universe of discourse of 2 letters. They
comprise 16 identities (obtained by equating each class to
itself) and 16 forms of the given equality, obtained by
equating the classes which correspond in each row to the
classes which are known to be equal to them, namely
\begin{displaymath}
\begin{matrix}
0 = a b' + a' b, & a b = a + b, & a' b' = a' + b', & a b + a' b' = 1 \\
a = b, & b' = a', & a b' = a' b, & a + b' = a' + b.
\end{matrix}
\end{displaymath}
Each of these 8 equalities counts for two, according as it
is considered as a determination of one or the other of its
members.
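As a check on this construction, the table may be rebuilt mechanically. The sketch below is an editorial illustration in Python, not part of the original text; the encoding of classes as sets of constituents is a choice of ours:

```python
from itertools import combinations

# Constituents of the universe of discourse on the two letters a, b.
CONSTITUENTS = ("ab", "ab'", "a'b", "a'b'")

def subclasses(s):
    """All sums (unions) of constituents drawn from s, from 0 up to s itself."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Logical zero N of the equality a = b, i.e. ab' + a'b = 0.
N = frozenset({"ab'", "a'b"})
N_PRIME = frozenset(CONSTITUENTS) - N

rows = subclasses(N)        # first column: 0, ab', a'b, ab' + a'b
cols = subclasses(N_PRIME)  # first row:    0, ab, a'b', ab + a'b'

# Each cell is the sum of the classes heading its row and its column.
table = [[r | c for c in cols] for r in rows]
cells = [cell for row in table for cell in row]

# The 2^n = 16 classes of discourse each appear exactly once.
assert len(set(cells)) == 16

# Given ab' + a'b = 0, two classes are equal exactly when their symmetric
# difference is a sub-class of N; this holds within every column.
for j in range(len(cols)):
    column = [table[i][j] for i in range(len(rows))]
    assert all((u ^ v) <= N for u in column for v in column)

# 16 classes, each equated to the 4 classes of its column: 64 consequences.
print(len(cells) * len(rows))
```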
\index{Consequences!Table of|)}
\section{Table of Causes}\label{ch:51}\index{Causes!Table of|(}
The same table may serve to
represent all the causes of the same equality in accordance
with the following theorem:
When the consequences of an equality $N = 0$ are expressed
in the form of determinations of any class $U$, the
causes of this equality are deduced from the consequences
of the \emph{opposite} equality, $N = 1$, put in the same form,
by changing $U$ to $U'$ in one of the two members.
For we know that the consequences of the equality $N = 0$
have the form
\begin{displaymath}
U = (N' + X) U + N Y U',
\end{displaymath}
and that the causes of the same equality have the form
\begin{displaymath}
U = N' X U + (N + Y) U'.
\end{displaymath}
Now, if we change $U$ into $U'$ in one of the members of
this last formula, it becomes
\begin{displaymath}
U = (N + X') U + N' Y' U',
\end{displaymath}
and the accents of $X$ and $Y$ can be suppressed since these
letters represent indeterminate classes. But then we have
the formula of the consequences of the equality $N' = 0$ or
$N = 1$.
This theorem being established, let us construct, for instance,
the table of causes of the equality $a = b$. This will
be the table of the consequences of the opposite equality
$a = b'$, for the first is equivalent to
\begin{displaymath}
a b' + a' b = 0,
\end{displaymath}
and the second to
\begin{displaymath}
(a b + a' b' = 0) = (a b' + a' b = 1).
\end{displaymath}
\begin{displaymath}
\begin{matrix}
0 & a b' & a' b & a b' + a' b \\
a b & a & b & a + b \\
a' b' & b' & a' & a' + b' \\
a b + a' b' & a + b' & a' + b & 1
\end{matrix}
\end{displaymath}
To derive the causes of the equality $a = b$ from this table
instead of the consequences of the opposite equality $a = b'$,
it is sufficient to equate the negative of each class to each
of the classes in the same column. Examples are:
\begin{displaymath}
\begin{matrix}
a' + b' = 0, & a' + b' = a' b', & a' + b' = a b + a' b', \\
a' + b = a, & a' + b = b', & a' + b = a + b';\ldots.
\end{matrix}
\end{displaymath}
Among the 64 causes of the equality under consideration
there are 16 absurdities (consisting in equating each class of
the table to its negative); and 16 forms of the equality (the
same, of course, as in the table of consequences, for two
equivalent equalities are at the same time both cause and
consequence of each other).
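That the entries so obtained are genuine causes can be spot-checked. In the editorial Python sketch below (not part of the original), classes are interpreted as subsets of a two-element universe, an encoding of our own, and a few of the example causes are verified to force $a = b$:

```python
from itertools import combinations, product

U = frozenset(range(2))
SUBSETS = [frozenset(s) for r in range(3) for s in combinations(U, r)]

def comp(x):
    """Complement (negative) of a class within the universe U."""
    return U - x

# Example causes drawn from the table, as pairs of class-valued
# functions of (a, b); each equality should force a = b.
CAUSES = [
    (lambda a, b: comp(a) | comp(b), lambda a, b: frozenset()),       # a'+b' = 0
    (lambda a, b: comp(a) | comp(b), lambda a, b: comp(a) & comp(b)), # a'+b' = a'b'
    (lambda a, b: comp(a) | b,       lambda a, b: a),                 # a'+b  = a
    (lambda a, b: comp(a) | b,       lambda a, b: comp(b)),           # a'+b  = b'
]

for lhs, rhs in CAUSES:
    for a, b in product(SUBSETS, repeat=2):
        if lhs(a, b) == rhs(a, b):
            assert a == b      # whenever the cause holds, a = b follows
```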
It will be noted that the table of causes differs from the table
of consequences only in the fact that it is symmetrical to the
other table with respect to the principal diagonal $(0, 1)$; hence
they can be made identical by substituting the word ``row'' for
the word ``column'' in the foregoing statement. And, indeed, since
the rule of the consequences concerns only classes of the same
column, we are at liberty so to arrange the classes in each column
on the rows that the rule of the causes will be verified by the
classes in the same row.
It will be noted, moreover, that, by the method of construction
adopted for this table, the classes which are the
negatives of each other occupy positions symmetrical with
respect to the center of the table. For this result, the subclasses
of the class $N'$ (the logical whole of the given
equality or the logical zero of the opposite equality) must
be placed in the first row in their natural order from 0 to $N'$;
then, in each division, must be placed the sum of the classes
at the head of its row and column.
With this precaution, we may sum up the two rules in the
following practical statement:
To obtain every consequence of the given equality (to
which the table relates) it is sufficient to equate each class
to every class in the same column; and, to obtain every
cause, it is sufficient to equate each class to every class in
the row occupied by its symmetrical class.
It is clear that the table relating to the equality $N = 0$
can also serve for the opposite equality $N = 1$, on condition
that the words ``row'' and ``column'' in the foregoing statement
be interchanged.
Of course the construction of the table relating to a given
equality is useful and profitable only when we wish to
enumerate all the consequences or the causes of this equality.
If we desire only one particular consequence or cause
relating to this or that class of the discourse, we make use
of one of the formulas given above.
\index{Causes!Table of|)}
\section{The Number of Possible Assertions}\label{ch:52}
If we regard logical functions and equations as developed with
respect to \emph{all} the letters, we can calculate the number of
assertions or different problems that may be formulated about $n$
simple terms. For all the functions thus developed can contain
only those constituents which have the coefficient 1 or the
coefficient 0 (and in the latter case, they do not contain them).
Hence they are additive combinations of these constituents; and,
since the number of the constituents is $2^n$, the number of
possible functions is $2^{2^n}$. From this must be deducted the
function in which all constituents are absent, which is
identically 0, leaving $2^{2^n}-1$ possible equations (255 when $n
= 3$). But these equations, in their turn, may be combined by
logical addition, \emph{i.e.}, by alternation; hence the number of
their combinations is $2^{2^{2^n}-1}-1$, excepting always the
null combination. This is the number of possible assertions%
\index{Assertions!Number of possible}
affecting $n$ terms. When $n = 2$, this number is as high as
32767.%
\footnote{\author{}{G. Peano,} \emph{Calcolo geometrico} (1888)
p.~x; \author{}{Schröder,} \emph{Algebra der Logik}, Vol. II,
pp.~144--148.} We must observe that only universal premises are
admitted in this calculus, as will be explained in the following
section.
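These counts are easy to reproduce. A brief editorial sketch in Python (the function name `counts` is ours):

```python
# Counting developed functions, equations, and assertions on n simple terms,
# following the enumeration in the text.
def counts(n):
    constituents = 2 ** n            # constituents of the discourse
    functions = 2 ** constituents    # each constituent kept or dropped
    equations = functions - 1        # discard the identically null function
    assertions = 2 ** equations - 1  # alternations of equations, minus the null one
    return constituents, functions, equations, assertions

assert counts(3)[2] == 255           # 2^(2^3) - 1 equations when n = 3
assert counts(2)[3] == 32767         # 2^(2^(2^2) - 1) - 1 assertions when n = 2
```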
\section{Particular Propositions}\label{ch:53}
Hitherto we have only considered propositions with an
\emph{affirmative} copula (\emph{i.e.}, inclusions or equalities)
corresponding to the \emph{universal} propositions
of classical logic.%
\footnote{The \emph{universal affirmative}, ``All $a$'s are $b$'s'', may be expressed by
the formulas
\begin{displaymath}
(a < b) = (a = a b) = (a b' = 0) = (a' + b = 1),
\end{displaymath}
and the \emph{universal negative}, ``No $a$'s are $b$'s'', by the formulas
\begin{displaymath}
(a < b') = (a = a b') = (a b = 0) = (a' + b' = 1).
\end{displaymath}} It remains for us to study propositions
with a \emph{negative} copula (non inclusions or inequalities),
which translate \emph{particular} propositions%
\footnote{For the \emph{particular affirmative}, ``Some $a$'s are $b$'s'', being the negation
of the universal negative, is expressed by the formulas
\begin{displaymath}
(a \nless b') = (a \neq a b') = (a b \neq 0) = (a'+ b' \neq 1),
\end{displaymath}
and the \emph{particular negative}, ``Some $a$'s are not $b$'s'', being the negation
of the universal affirmative, is expressed by the formulas
\begin{displaymath}
(a \nless b) = (a \neq a b) = (a b' \neq 0) = (a'+ b \neq 1).
\end{displaymath}}; but the calculus of
propositions having a negative copula results from laws already
known, especially from the formulas of \author{}{De Morgan}%
\index{De Morgan!Formulas of} and the law of contraposition. We shall
enumerate the chief formulas derived from it.
The principle of composition gives rise to the following
formulas:
\begin{align*}
(c \nless ab) &= (c \nless a) + (c \nless b), \\
(a + b \nless c) &= (a \nless c) + (b \nless c),
\end{align*}
whence come the particular instances
\begin{align*}
(a b \neq 1) &= (a \neq 1) + (b \neq 1), \\
(a + b \neq 0) &= (a \neq 0) + (b \neq 0).
\end{align*}
From these may be deduced the following important implications:
\begin{align*}
(a \neq 0) &< (a + b \neq 0),\\
(a \neq 1) &< (a b \neq 1).
\end{align*}
From the principle of the syllogism, we deduce, by the
law of transposition,
\begin{align*}
(a < b) (a \neq 0) < (b \neq 0),\\
(a < b) (b \neq 1) < (a \neq 1).
\end{align*}
The formulas for transforming inclusions and equalities give
corresponding formulas for the transformation of non-inclusions
and inequalities,
\begin{align*}
(a \nless b) = (a b' \neq 0) &= (a' + b \neq 1),\\
(a \neq b) = (a b' + a' b \neq 0) &= (ab + a'b' \neq 1).
\end{align*}
\section{Solution of an Inequation with One Unknown}\label{ch:54}
If
we consider the conditional inequality (\emph{inequation}) with
one unknown
\begin{displaymath}
a x + b x' \neq 0
\end{displaymath}
we know that its first member is contained in the sum of
its coefficients
\begin{displaymath}
a x + b x' < a + b.
\end{displaymath}
From this we conclude that, if this inequation is verified,
we have the inequality
\begin{displaymath}
a + b \neq 0.
\end{displaymath}
This is the necessary condition of the solvability of the
inequation, and the resultant of the elimination of the
unknown $x$. For, since we have the equivalence
\begin{displaymath}
\prod_x (ax + bx' = 0) = (a + b = 0),
\end{displaymath}
we have also by contraposition the equivalence
\begin{displaymath}
\sum_x (ax + bx' \neq 0) = (a + b \neq 0).
\end{displaymath}
Likewise, from the equivalence
\begin{displaymath}
\sum_x (ax + bx' = 0) = (ab = 0)
\end{displaymath}
we can deduce the equivalence
\begin{displaymath}
\prod_x (ax + bx'\neq 0) = (ab \neq 0),
\end{displaymath}
which signifies that the necessary and sufficient condition%
\index{Condition!Necessary and sufficient} for the inequation to be always
true is
\begin{displaymath}
(ab \neq 0);
\end{displaymath}
and, indeed, we know that in this case the equation
\begin{displaymath}
(ax + bx' = 0)
\end{displaymath}
is impossible (never true).
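Both resultants admit a brute-force check. In the editorial sketch below (Python; the two-element test universe is ours), `member` computes the first member $ax + bx'$:

```python
from itertools import combinations, product

# All 4 classes of a two-element universe.
U = frozenset(range(2))
CLASSES = [frozenset(s) for r in range(3) for s in combinations(U, r)]

def member(a, b, x):       # the first member  ax + bx'
    return (a & x) | (b & (U - x))

for a, b in product(CLASSES, repeat=2):
    solvable = any(member(a, b, x) for x in CLASSES)  # some x gives != 0
    always = all(member(a, b, x) for x in CLASSES)    # every x gives != 0
    assert solvable == bool(a | b)   # resultant of elimination:  a + b != 0
    assert always == bool(a & b)     # always true exactly when  ab != 0
```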
Since, moreover, we have the equivalence
\begin{displaymath}
(ax + bx' = 0) = (x = a'x + bx'),
\end{displaymath}
we have also the equivalence
\begin{displaymath}
(ax + bx' \neq 0)=(x \neq a'x + bx').
\end{displaymath}
Notice the significance of this solution:
\begin{displaymath}
(ax + bx' \neq 0) = (ax \neq 0) + (bx' \neq 0) = (x \nless a') + (b \nless x).
\end{displaymath}
``Either $x$ is not contained in $a'$, or it does not contain $b$''.
This is the negative of the double inclusion
\begin{displaymath}
b< x< a.
\end{displaymath}
Just as the product of several equalities is reduced to one
single equality, the sum (the alternative) of several inequalities
may be reduced to a single inequality. But neither several
alternative equalities nor several simultaneous inequalities can
be reduced to one.
\section{System of an Equation and an Inequation}\label{ch:55}
We
shall limit our study to the case of a simultaneous equality
and inequality. For instance, let the two premises be
\begin{displaymath}
(ax + bx' = 0) \quad (cx + dx' \neq 0).
\end{displaymath}
To satisfy the former (the equation) its resultant $ab = 0$
must be verified. The solution of this equation is
\begin{displaymath}
x=a'x+bx'.
\end{displaymath}
Substituting this expression (which is equivalent to the
equation) in the inequation, the latter becomes
\begin{displaymath}
(a'c + ad) x + (bc + b'd) x' \neq 0.
\end{displaymath}
Its resultant (the condition of its solvability) is
\begin{displaymath}
(a'c + ad + bc + b'd \neq 0) = [(a' + b) c + (a + b') d \neq 0],
\end{displaymath}
which, taking into account the resultant of the equality,
\begin{displaymath}
(ab = 0) = (a' + b = a') = (a + b' = b')
\end{displaymath}
may be reduced to
\begin{displaymath}
a'c + b'd \neq 0.
\end{displaymath}
The same result may be reached by observing that the
equality is equivalent to the two inclusions
\begin{displaymath}
(x < a') (x' < b'),
\end{displaymath}
and by multiplying both members of each by the same term
\begin{align*}
(cx < a'c)(dx' < b'd) &< (cx+dx' < a'c + b'd) \\
(cx + dx' \neq 0) &< (a'c + b'd \neq 0).
\end{align*}
This resultant implies the resultant of the inequality taken alone
\begin{displaymath}
c + d \neq 0,
\end{displaymath}
so that we do not need to take the latter into account. It is
therefore sufficient to add to it the resultant of the equality to
have the complete resultant of the proposed system
\begin{displaymath}
(ab = 0) (a'c + b'd \neq 0).
\end{displaymath}
The solution of the transformed inequality (which consequently
involves the solution of the equality) is
\begin{displaymath}
x \neq (a'c' + ad')x + (bc + b'd)x'.
\end{displaymath}
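The complete resultant of the system can be confirmed by exhausting a small universe. An editorial Python sketch (the two-element universe and the helper `f` are assumptions of ours, not from the text):

```python
from itertools import combinations, product

U = frozenset(range(2))
CLASSES = [frozenset(s) for r in range(3) for s in combinations(U, r)]

def f(p, q, x):                       # the expression  px + qx'
    return (p & x) | (q & (U - x))

for a, b, c, d in product(CLASSES, repeat=4):
    # is there an x satisfying  ax + bx' = 0  and  cx + dx' != 0 ?
    satisfiable = any(not f(a, b, x) and f(c, d, x) for x in CLASSES)
    # complete resultant stated in the text: (ab = 0)(a'c + b'd != 0)
    resultant = not (a & b) and bool(((U - a) & c) | ((U - b) & d))
    assert satisfiable == resultant
```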
\section{Formulas Peculiar to the Calculus of Propositions.}\label{ch:56}
All the formulas which we have hitherto noted are valid
alike for propositions and for concepts. We shall now
establish a series of formulas which are valid only for propositions,
because all of them are derived from an axiom
peculiar to the calculus of propositions, which may be called
the \emph{principle of assertion}.%
\index{Assertion!Principle of}
This axiom is as follows:
\begin{axiom}\label{axiom:X}\index{Axioms}
\begin{displaymath}
(a = 1) = a.
\end{displaymath}
\end{axiom}
P.~I.: To say that a proposition $a$ is true is to state the
proposition itself. In other words, to state a proposition is
to affirm the truth of that proposition.%
\footnote{We can see at once that this formula is not susceptible of a conceptual
interpretation (C.~I.); for, if $a$ is a concept, $(a = 1)$ is a proposition,
and we would then have a logical equality (identity) between
a concept and a proposition, which is absurd.}
\emph{Corollary}:
\begin{displaymath}
a' = (a' = 1) = (a = 0).
\end{displaymath}
P.~I.: The negative of a proposition $a$ is equivalent to the
affirmation that this proposition is false.
By Ax.~\ref{axiom:IX} (\S\ref{ch:20}), we already have
\begin{displaymath}
(a = 1) (a = 0) = 0,
\end{displaymath}
``A proposition cannot be both true and false at the same
time'', for
\begin{displaymath}
\tag{Syll.} (a = 1) (a = 0) < (1=0) = 0.
\end{displaymath}
But now, according to Ax.~\ref{axiom:X}, we have
\begin{displaymath}
(a = 1) + (a = 0) = a + a' = 1.
\end{displaymath}
``A proposition is either true or false''. From these two
formulas combined we deduce directly that the propositions
$(a = 1)$ and $(a = 0)$ are contradictory, \emph{i.e.},
\begin{displaymath}
(a \neq 1) = (a = 0), \qquad (a \neq 0) = (a = 1).
\end{displaymath}
From the point of view of calculation Ax.~\ref{axiom:X} makes it possible
to reduce to its first member every equality whose second member is 1, and
to transform inequalities into equalities. Of course these equalities and
inequalities must have propositions as their members. Nevertheless all the
formulas of this section are also valid for classes in the particular case
where the universe of discourse contains only one element, for then there
are no classes but 0 and 1. In short, the special calculus of propositions
is equivalent to the calculus of classes%
\index{Classes!Calculus of}\index{Calculus!of classes} when the classes can
possess only the two values $0$ and $1$.
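This two-valued equivalence makes the axiom and its corollaries directly checkable. An editorial sketch in Python, with `True` and `False` standing for 1 and 0 (an encoding of ours):

```python
# The two-valued interpretation: propositions take only the values 0 and 1.
VALUES = (False, True)

for a in VALUES:
    assert (a == True) == a              # Ax. X :  (a = 1) = a
    assert (not a) == (a == False)       # corollary :  a' = (a = 0)
    assert (a == True) != (a == False)   # true or false, never both
    assert (a != True) == (a == False)   # (a != 1) = (a = 0)
    assert (a != False) == (a == True)   # (a != 0) = (a = 1)
```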
\section{Equivalence of an Implication and an Alternative}\label{ch:57}
The fundamental equivalence%
\index{Alternative!Equivalence of an implication and an}
\begin{displaymath}
(a < b) = (a' + b = 1)
\end{displaymath}
gives rise, by Ax.~\ref{axiom:X}, to the equivalence
\begin{displaymath}
(a < b) = (a' + b)
\end{displaymath}
which is no less fundamental in the calculus of propositions.
To say that $a$ implies $b$ is the same as affirming ``not-$a$ or
$b$'', \emph{i.e.}, ``either $a$ is false or $b$ is true.'' This equivalence
is often employed in everyday conversation.
\emph{Corollary}.---For any equality, we have the equivalence
\begin{displaymath}
(a = b) = ab + a'b'.
\end{displaymath}
\emph{Demonstration:}
\begin{displaymath}
(a = b) = (a < b) (b < a) = (a' + b) (b' + a) = ab + a'b'
\end{displaymath}
``To affirm that two propositions are equal (equivalent)
is the same as stating that either both are true or both are
false''.
The fundamental equivalence established above has important
consequences which we shall enumerate.
In the first place, it makes it possible to reduce secondary,
tertiary, etc., propositions to primary propositions, or even
to sums (alternatives) of elementary propositions. For it
makes it possible to suppress the copula of any proposition,
and consequently to lower its order of complexity. An implication
($A < B$), in which $A$ and $B$ represent propositions
more or less complex, is reduced to the sum $A' + B$, in
which only copulas within $A$ and $B$ appear, that is, propositions
of an inferior order. Likewise an equality ($A = B$)
is reduced to the sum ($AB + A'B'$) which is of a lower
order.
We know that the principle of composition%
\index{Composition!Principle of} makes it possible to combine several
\emph{simultaneous} inclusions or equalities, but we cannot combine
alternative inclusions or equalities, or at least the result is not
equivalent to their alternative but is only a consequence of it. In short,
we have only the \emph{implications}
\begin{align*}
(a < c) + (b < c) &< (ab < c), \\
(c < a) + (c < b) &< (c < a + b),
\end{align*}
which, in the special cases where $c = 0$ and $c = 1$, become
\begin{align*}
(a = 0) + (b = 0) &< (ab = 0), \\
(a = 1) + (b = 1) &< (a + b = 1).
\end{align*}
In the calculus of classes,%
\index{Classes!Calculus of}\index{Calculus!of classes} the converse
implications are not valid, for, from the statement that the class $ab$ is
null, we cannot conclude that one of the classes $a$ or $b$ is null (they
can be not-null and still not have any element in common); and from the
statement that the sum $(a + b)$ is equal to 1 we cannot conclude that
either $a$ or $b$ is equal to 1 (these classes can \emph{together} comprise
all the elements of the universe without any of them \emph{alone}
comprising all). But these converse implications are true in the calculus
of propositions
\begin{align*}
(ab < c) &< (a < c) + (b < c), \\
(c < a + b) &< (c < a) + (c < b);
\end{align*}
for they are deduced from the equivalence established above, or
rather we may deduce from it the corresponding equalities which
imply them,
\begin{align*}
\tag{1} (ab < c) &= (a < c) + (b < c), \\
\tag{2} (c < a + b) &= (c < a) + (c < b).
\end{align*}
\emph{Demonstration:}
\begin{gather*}
\tag{1} (ab < c) = a' + b' + c, \\
(a < c) + (b < c) = (a' + c) + (b' + c) = a' + b' + c; \\
\tag{2} (c < a + b) = c' + a + b, \\
(c < a) + (c < b) = (c' + a) + (c' + b) = c' + a + b.
\end{gather*}
In the special cases where $c = 0$ and $c = 1$ respectively,
we find
\begin{gather*}
\tag{3} (ab = 0) = (a = 0) + (b = 0), \\
\tag{4} (a + b = 1) = (a = 1) + (b = 1).
\end{gather*}
P.~I.: (1) To say that two propositions united imply a
third is to say that one of them implies this third proposition.
(2) To say that a proposition implies the alternative of
two others is to say that it implies one of them.
(3) To say that two propositions combined are false is to
say that one of them is false.
(4) To say that the alternative of two propositions is true
is to say that one of them is true.
The paradoxical character of the first three of these statements
will be noted in contrast to the self-evident character of the
fourth. These paradoxes are explained, on the one hand, by the
special axiom which states that a proposition is either true or
false; and, on the other hand, by the fact that the false implies
the true and that \emph{only} the false is not implied by the
true. For instance, if both premises in the first statement are
true, each of them implies the consequence, and if one of them is
false, it implies the consequence (true or false). In the second,
if the alternative is true, one of its terms must be true, and
consequently will, like the alternative, be implied by the premise
(true or false). Finally, in the third, the product of two
propositions cannot be false unless one of them is false, for, if
both were true, their product would be true (equal to 1).
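Since propositions take only the values 0 and 1, the four equalities may be verified by exhausting the truth values. An editorial Python sketch (the helper `implies`, rendering $a < b$ as $a' + b$, is ours):

```python
from itertools import product

VALUES = (False, True)

def implies(p, q):          # a < b  is  a' + b  in the two-valued calculus
    return (not p) or q

for a, b, c in product(VALUES, repeat=3):
    # equalities (1) and (2), valid for propositions though not for classes
    assert implies(a and b, c) == (implies(a, c) or implies(b, c))
    assert implies(c, a or b) == (implies(c, a) or implies(c, b))
for a, b in product(VALUES, repeat=2):
    # special cases (3) and (4)
    assert ((a and b) == False) == ((a == False) or (b == False))
    assert ((a or b) == True) == ((a == True) or (b == True))
```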
\section{Law of Importation and Exportation}\label{ch:58}
The fundamental
equivalence $(a < b) = a' + b$ has many other interesting
consequences. One of the most important of these
is \emph{the law of importation and exportation}, which is expressed
by the following formula:
\begin{displaymath}
[a < (b < c)] = (ab < c).
\end{displaymath}
``To say that if $a$ is true $b$ implies $c$, is to say that $a$
and $b$ imply $c$''.
This equality involves two converse implications: If we
infer the second member from the first, we \emph{import} into the
implication $(b < c)$ the hypothesis or condition $a$; if we infer
the first member from the second, we, on the contrary,
\emph{export} from the implication $(ab < c)$ the hypothesis $a$.
\emph{Demonstration:}
\begin{gather*}
[a < (b < c)] = a' + (b < c) = a' + b' + c, \\
(ab < c) = (ab)' + c = a' + b' + c.
\end{gather*}
\emph{Cor.} 1.---Obviously we have the equivalence
\begin{displaymath}
[a < (b < c)] = [b < (a < c)],
\end{displaymath}
since both members are equal to $(ab < c)$, by the commutative
law of multiplication.
\emph{Cor.} 2.---We have also
\begin{displaymath}
[a < (a < b)] = (a < b),
\end{displaymath}
for, by the law of importation and exportation,
\begin{displaymath}
[a < (a < b)] = (aa < b) = (a < b).
\end{displaymath}
If we apply the law of importation to the two following
formulas, of which the first results from the principle of
identity and the second expresses the principle of contraposition,%
\index{Contraposition!Principle of}
\begin{displaymath}
(a < b) < (a < b), \qquad (a < b) < (b' < a'),
\end{displaymath}
we obtain the two formulas
\begin{displaymath}
(a < b) a < b, \qquad (a < b) b' < a',
\end{displaymath}
which are the two types of \emph{hypothetical reasoning}: ``If $a$
implies $b$, and if $a$ is true, $b$ is true'' (\emph{modus ponens}); ``If $a$
implies $b$, and if $b$ is false, $a$ is false'' (\emph{modus tollens}).
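The law and the two modes of hypothetical reasoning can likewise be checked over the truth values; an editorial Python sketch (helper names ours):

```python
from itertools import product

VALUES = (False, True)

def implies(p, q):          # a < b  is  a' + b
    return (not p) or q

for a, b, c in product(VALUES, repeat=3):
    # law of importation and exportation: [a < (b < c)] = (ab < c)
    assert implies(a, implies(b, c)) == implies(a and b, c)
for a, b in product(VALUES, repeat=2):
    # modus ponens  (a < b) a < b   and  modus tollens  (a < b) b' < a'
    assert implies(implies(a, b) and a, b)
    assert implies(implies(a, b) and not b, not a)
```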
\emph{Remark}. These two formulas could be directly deduced
by the principle of assertion, from the following
\begin{align*}
(a < b) (a = 1) &< (b = 1), \\
(a < b) (b = 0) &< (a = 0),
\end{align*}
which are not dependent on the law of importation and
which result from the principle of the syllogism.
From the same fundamental equivalence, we can deduce
several paradoxical formulas:
\begin{displaymath}
\tag*{1.} a < (b < a), \qquad a' < (a < b).
\end{displaymath}
``If $a$ is true, $a$ is implied by any proposition $b$; if $a$ is
false, $a$ implies any proposition $b$''. This agrees with the
known properties of~0 and~1.
\begin{displaymath}
\tag*{2.} a < [(a < b) < b], \qquad a' < [(b < a) < b'].
\end{displaymath}
``If $a$ is true, then `$a$ implies $b$' implies $b$; if $a$ is false,
then `$b$ implies $a$' implies not-$b$.''
These two formulas are other forms of hypothetical reasoning
(\emph{modus ponens} and \emph{modus tollens}).
\begin{displaymath}
\tag*{3.} [(a < b) < a] = a,%
\footnote{This formula is \author{}{Bertrand Russell's} ``principle of reduction''.
See \emph{The Principles of Mathematics}, Vol. I, p.~17 (Cambridge, 1903).}
\qquad [(b < a) < a'] = a',
\end{displaymath}
``To say that, if $a$ implies $b$, $a$ is true, is the same as affirming
$a$; to say that, if $b$ implies $a$, $a$ is false, is the same as
denying $a$''.
\emph{Demonstration:}
\begin{align*}
[(a < b) < a] &= (a' + b < a) = ab' + a = a, \\
[(b < a) < a'] &= (b' + a < a') = a'b + a' = a'.
\end{align*}
In formulas (1) and (3), in which $b$ is any term at all,
we might introduce the sign $\prod$ with respect to $b$. In the
following formula, it becomes necessary to make use of this
sign.
\begin{displaymath}
\tag*{4.} \prod_{x} \left\{[a < (b < x)] < x \right\} = ab.
\end{displaymath}
\emph{Demonstration:}
\begin{align*}
\left\{[a < (b < x)] < x \right\} &= \left\{[a' + (b < x)] < x \right\} \\
&= [(a' + b' + x) < x] = abx' + x = ab + x.
\end{align*}
We must now form the product $\prod_{x}(ab + x)$, where $x$
can assume every value, including 0 and 1. Now, it is
clear that the part common to all the terms of the form
$(ab + x)$ can only be $ab$. For, (1) $ab$ is contained in each
of the sums $(ab + x)$ and therefore in the part common to
all; (2) the part common to all the sums $(ab + x)$ must be
contained in $(ab + 0)$, that is, in $ab$. Hence this common
part is equal to $ab$,%
\footnote{This argument is general and from it we can deduce the formula
\begin{displaymath}
\prod_{x}(a + x) = a,
\end{displaymath}
whence may be derived the correlative formula
\begin{displaymath}
\sum_{x} ax = a.
\end{displaymath}
} which proves the theorem.
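The two footnote formulas hold in any finite universe of classes, as the following editorial Python sketch confirms (the two-element universe is a choice of ours):

```python
from itertools import combinations, product

# Classes of a two-element universe.
U = frozenset(range(2))
CLASSES = [frozenset(s) for r in range(3) for s in combinations(U, r)]

for a, b in product(CLASSES, repeat=2):
    # the part common to every sum ab + x is ab itself
    common = U
    for x in CLASSES:
        common &= (a & b) | x
    assert common == a & b           # prod_x (ab + x) = ab
for a in CLASSES:
    total = frozenset()
    for x in CLASSES:
        total |= a & x
    assert total == a                # sum_x (ax) = a
```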
\section{Reduction of Inequalities to Equalities}\label{ch:59}
As we
have said, the principle of assertion enables us to reduce
inequalities to equalities by means of the following formulas:
\begin{gather*}
(a \neq 0) = (a = 1), \qquad (a \neq 1) = (a = 0), \\
(a \neq b) = (a = b').
\end{gather*}
For,
\begin{displaymath}
(a \neq b) = (ab' + a'b \neq 0) = (ab' + a'b = 1) = (a = b').
\end{displaymath}
Consequently, we have the paradoxical formula
\begin{displaymath}
(a \neq b) = (a = b').
\end{displaymath}
This is easily understood, for, whatever the proposition~$b$,
either it is true and its negative is false, or it is false and
its negative is true. Now, whatever the proposition~$a$ may
be, it is true or false; hence it is necessarily equal either to
$b$ or to $b'$. Thus to deny an equality (between propositions)
is to affirm the \emph{opposite} equality.
Thence it results that, in the calculus of propositions, we
do not need to take inequalities into consideration---a fact
which greatly simplifies both theory and practice. Moreover,
just as we can combine alternative equalities, we can
also combine simultaneous inequalities, since they are reducible
to equalities.
For, from the formulas previously established (\S\ref{ch:57})
\begin{align*}
(ab = 0) &= (a = 0) + (b = 0),\\
(a + b = 1) &= (a = 1) + (b = 1),
\end{align*}
we deduce by contraposition
\begin{align*}
(a \neq 0) (b \neq 0) &= (ab \neq 0),\\
(a \neq 1) (b \neq 1) &= (a + b \neq 1).
\end{align*}
These two formulas, moreover, according to what we have
just said, are equivalent to the known formulas
\begin{align*}
(a = 1) (b = 1) &= (ab = 1),\\
(a = 0) (b = 0) &= (a + b = 0).
\end{align*}
Therefore, in the calculus of propositions, we can solve
all simultaneous systems of equalities or inequalities and all
alternative systems of equalities or inequalities, which is not
possible in the calculus of classes.%
\index{Classes!Calculus of}\index{Calculus!of classes} To this end, it is
necessary only to apply the following rule:
First reduce the inclusions to equalities and the non-inclusions
to inequalities; then reduce the equalities so that their second
members will be 1, and the inequalities so that their second
members will be 0, and transform the latter into equalities having
1 for a second member; finally, suppress the second members 1 and
the signs of equality, \emph{i.e.}, form the product of the first
members of the simultaneous equalities and the sum of the first
members of the alternative equalities, retaining the parentheses.
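By way of illustration (the example system $(a < b)(c \neq d)$ is ours, not from the text), the rule can be checked against direct evaluation of the premises: the inclusion becomes $a' + b = 1$, the inequality becomes $cd' + c'd = 1$, and suppressing the second members leaves the product $(a' + b)(cd' + c'd)$:

```python
from itertools import product

VALUES = (False, True)

for a, b, c, d in product(VALUES, repeat=4):
    premise = ((not a) or b) and (c != d)   # (a < b)(c != d), evaluated directly
    reduced = ((not a) or b) and ((c and not d) or (not c and d))
    assert premise == reduced               # the rule preserves the meaning
```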
\section{Conclusion}\label{ch:60}
The foregoing exposition is far from being exhaustive; it does not
pretend to be a complete treatise on the algebra of logic, but
only undertakes to make known the elementary principles and
theories of that science. The algebra of logic is an algorithm
\index{Algebra!of logic an algorithm} \index{Algorithm!Algebra of
logic an} with laws peculiar to itself. In some phases it is very
analogous to ordinary algebra, and in others it is very widely
different. For instance, it does not recognize the distinction of
\emph{degrees}; the
laws of tautology and absorption%
\index{Absorption!Law of} introduce into it great
simplifications by excluding from it numerical coefficients.
It is a formal calculus which can give rise to all sorts of
theories and problems, and is susceptible of an almost infinite
development.
But at the same time it is a restricted system, and it is
important to bear in mind that it is far from embracing all
of logic. Properly speaking, it is only the algebra of
classical logic. Like this logic, it remains confined to the
domain circumscribed by Aristotle,\index{Aristotle} namely, the domain of
the relations of inclusion between concepts and the relations
of implication between propositions. It is true that classical
logic (even when shorn of its errors and superfluities) was
much more narrow than the algebra of logic. It is almost
entirely contained within the bounds of the theory of the
syllogism whose limits to-day appear very restricted and
artificial. Nevertheless, the algebra of logic simply treats,
with much more breadth and universality, problems of the
same order; it is at bottom nothing else than the theory
of classes or aggregates considered in their relations of inclusion
or identity. Now logic ought to study many other
kinds of concepts than generic concepts (concepts of classes)
and many other relations than the relation of inclusion (of
subsumption) between such concepts. It ought, in short, to
develop into a logic of relations, which \author{}{Leibniz} foresaw,
which \author{}{Peirce} and \author{}{Schröder} founded, and which \author{}{Peano} and
\author{}{Russell} seem to have established on definite foundations.
While classical logic and the algebra of logic are of hardly any
use to mathematics, mathematics, on the other hand, finds in the
logic of relations its concepts and fundamental principles; the
true logic of mathematics is the logic of relations. The algebra
of logic itself arises out of pure logic considered as a
particular mathematical theory, for it rests on principles which
have been implicitly postulated and which are not susceptible of
algebraic or symbolic expression because they are the foundation
of all symbolism and of all
the logical calculus.\footnote{The principle of deduction%
\index{Deduction!Principle of} and the principle of substitution%
\index{Substitution!Principle of}. See the author's%
\index{Couturat} \emph{Manuel de Logistique}, Chapter~1, \S\S~2
and~3 [not published], and \emph{Les Principes des Mathématiques},
Chapter~1, A.} Accordingly, we can say that the algebra of logic is
a \emph{mathematical} logic by its form and by its method, but it must
not be mistaken for the logic \emph{of mathematics}.
\cleardoublepage
\begin{theindex}
\item Absorption, Law of
\item Absurdity, Type of
\item Addition, and multiplication, Logical
  \subitem and multiplication, Modulus of
  \subitem and multiplication, Theorems on
  \subitem Logical, not disjunctive
\item Affirmative propositions
\item Algebra, of logic an algorithm
  \subitem of logic compared to mathematical algebra
  \subitem of thought
\item Algorithm, Algebra of logic an
\item Alphabet of human thought
\item Alternative
  \subitem affirmation
  \subitem Equivalence of an implication and an
\item Antecedent
\item Aristotle
\item Assertion, Principle of
\item Assertions, Number of possible
\item Axioms
\indexspace
\item Baldwin
\item Boole
  \subitem Problem of
\item Bryan, William Jennings
\indexspace
\item Calculus, Infinitesimal
  \subitem Logical
  \subitem \emph{ratiocinator}
\item Cantor, Georg
\item Categorical syllogism
\item Cause
\item Causes, Forms of
  \subitem Law of
  \subitem Sixteen
  \subitem Table of
\item Characters
\item Classes, Calculus of
\item Classification of dichotomy
\item Commutativity
\item Composition, Principle of
\item Concepts, Calculus of
\item Condition
  \subitem Necessary and sufficient
  \subitem Necessary but not sufficient
  \subitem of impossibility and indetermination
\item \emph{Connaissances}
\item Consequence
\item Consequences, Forms of
  \subitem Law of
  \subitem of the syllogism
  \subitem Sixteen
  \subitem Table of
\item Consequent
\item Constituents
  \subitem Properties of
\item Contradiction, Principle of
\item Contradictory propositions
  \subitem terms
\item Contraposition, Law of
  \subitem Principle of
\item Council, Members of
\item Couturat
\indexspace
\item Dedekind
\item Deduction
  \subitem Principle of
\item Definition, Theory of
\item De Morgan
  \subitem Formulas of
\item Descartes
\item Development
  \subitem Law of
  \subitem of logical functions
  \subitem of mathematics
  \subitem of symbolic logic
\item Diagrams of Venn, Geometrical
\item Dichotomy, Classification of
\item Disjunctive, Logical addition not
  \subitem sums
\item Distributive law
\item Double inclusion
  \subitem expressed by an indeterminate
  \subitem Negative of the
\item Double negation
\item Duality, Law of
\indexspace
\item Economy of mental effort
\item Elimination of known terms
  \subitem of middle terms
  \subitem of unknowns
  \subitem Resultant of
  \subitem Rule for resultant of
\item Equalities, Formulas for transforming inclusions into
  \subitem Reduction of inequalities to
\item Equality a primitive idea
  \subitem Definition of
  \subitem Notion of
\item Equation, and an inequation
  \subitem Throwing into an
\item Equations, Solution of
\item Excluded middle, Principle of
\item Exclusion, Principle of
\item Exclusive, Mutually
\item Existence, Postulate of
\item Exhaustion, Principle of
\item Exhaustive, Collectively
\indexspace
\item Forms, Law of
  \subitem of consequences and causes
\item Frege
  \subitem Symbolism of
\item Functions
  \subitem Development of logical
  \subitem Integral
  \subitem Limits of
  \subitem Logical
  \subitem of variables
  \subitem Properties of developed
  \subitem Propositional
  \subitem Sums and products of
  \subitem Values of
\indexspace
\item Hôpital, Marquis de l'
\item Huntington, E. V.
\item Hypothesis
\item Hypothetical arguments
  \subitem reasoning
  \subitem syllogism
\indexspace
\item Ideas, Simple and complex
\item Identity
  \subitem Principle of
  \subitem Type of
\item Ideography
\item Implication
  \subitem and an alternative, Equivalence of an
  \subitem Relations of
\item Importation and exportation, Law of
\item Impossibility, Condition of
\item Inclusion
  \subitem a primitive idea
  \subitem Double
  \subitem expressed by an indeterminate
  \subitem Negative of the double
  \subitem Relation of
\item Inclusions into equalities, Formulas for transforming
\item Indeterminate
  \subitem Inclusion expressed by an
\item Indetermination
  \subitem Condition of
\item Inequalities, to equalities, Reduction of
  \subitem Transformation of non-inclusions and
\item Inequation, Equation and an
  \subitem Solution of an
\item Infinitesimal calculus
\item Integral function
\item Interpretations of the calculus
\indexspace
\item Jevons
  \subitem Logical piano of
\item Johnson, W. E.
\indexspace
\item Known terms (\emph{connaissances})
\indexspace
\item Ladd-Franklin, Mrs.
\item Lambert
\item Leibniz
\item Limits of a function
\indexspace
\item MacColl
\item MacFarlane, Alexander
\item Mathematical function
  \subitem logic
\item Mathematics, Philosophy a universal
\item Maxima of discourse
\item Middle, Principle of excluded
  \subitem terms, Elimination of
\item Minima of discourse
\item Mitchell, O.
\item Modulus of addition and multiplication
\item \emph{Modus ponens}
\item \emph{Modus tollens}
\item Müller, Eugen
\item Multiplication. See \emph{s. v.} ``Addition.''
\indexspace
\item Negation
  \subitem defined
  \subitem Double
  \subitem Duality not derived from
\item Negative
  \subitem of the double inclusion
  \subitem propositions
\item Non-inclusions and inequalities, Transformation of
\item Notation
\item Null-class
\item Number of possible assertions
\indexspace
\item One, Definition of
\indexspace
\item Particular propositions
\item Peano
\item Peirce, C. S.
\item Philosophy a universal mathematics
\item Piano of Jevons, Logical
\item Poretsky
  \subitem Formula of
  \subitem Method of
\item Predicate
\item Premise
\item Primary proposition
\item Primitive idea
  \subitem Equality a
  \subitem Inclusion a
\item Product, Logical
\item Propositions
  \subitem Calculus of
  \subitem Contradictory
  \subitem Formulas peculiar to the calculus of
  \subitem Implication between
  \subitem reduced to lower orders
  \subitem Universal and particular
\indexspace
\item Reciprocal
\item \emph{Reductio ad absurdum}
\item Reduction, Principle of
\item Relations, Logic of
\item Relatives, Logic of
\item Resultant of elimination
  \subitem Rule for
\item Russell, B.
\indexspace
\item Schröder
  \subitem Theorem of
\item Secondary proposition
\item Simplification, Principle of
\item Simultaneous affirmation
\item Solution of equations
  \subitem of inequations
\item Subject
\item Substitution, Principle of
\item Subsumption
\item Summand
\item Sums
  \subitem and products of functions
  \subitem Disjunctive
  \subitem Logical
\item Syllogism, Principle of the
  \subitem Theory of the
\item Symbolic logic
  \subitem Development of
\item Symbolism in mathematics
\item Symbols, Origin of
\item Symmetry
\indexspace
\item Tautology, Law of
\item Term
\item Theorem
\item Thesis
\item Thought
  \subitem Algebra of
  \subitem Alphabet of human
  \subitem Economy of
\item Transformation
  \subitem of inclusions into equalities
  \subitem of inequalities into equalities
  \subitem of non-inclusions and inequalities
\indexspace
\item Universal
  \subitem characteristic of Leibniz
  \subitem propositions
\indexspace
\item Universe of discourse
\item Unknowns, Elimination of
\indexspace
\item Variables, Functions of
\item Venn, John
  \subitem Geometrical diagrams of
  \subitem Mechanical device of
  \subitem Problem of
\item Viète
\item Voigt
\indexspace
\item Whitehead, A. N.
\item Whole, Logical
\indexspace
\item Zero
  \subitem Definition of
  \subitem Logical
\end{theindex}
\newpage
\chapter{PROJECT GUTENBERG ``SMALL PRINT''}
\small
\pagenumbering{gobble}
\begin{verbatim}
End of the Project Gutenberg EBook of The Algebra of Logic, by Louis Couturat
*** END OF THIS PROJECT GUTENBERG EBOOK THE ALGEBRA OF LOGIC ***
***** This file should be named 10836-t.tex or 10836-t.zip *****
This and all associated files of various formats will be found in:
https://www.gutenberg.org/1/0/8/3/10836/
Produced by David Starner, Arno Peters, Susan Skinner
and the Online Distributed Proofreading Team.
\end{verbatim}
\normalsize
\end{document}