2. Numbers and data
7. Repetition and concatenation
8. Self representations
9. Autonomy of representations
Representations are a kind of beings.
Could we see representations as a kind of signifiers?
Representation per se (by construction) or per accident (trace)
In the traditional world, an enormous ditch separated representations from the represented beings. From the standpoints of mass and energy, there was no common scale between a text or a painting and the person or the country described. The difference was also born of the difficulty, and often the impossibility, of duplicating the thing, when a copy of the representation was comparatively easy. (That is of course less true for sculpture or the performing arts.) "The map is not the territory" was a fundamental motto of any epistemology.
We must, especially here, "de-anthropomorphize"! We do not deal here with representations for human beings (see the special chapter), but for systems in general (finite automata, computers (not seen in their physical aspects))...
A major point: text. The normal ways human beings use it are writing and sound. In both cases, these documents include a lot of "decorative" aspects that in principle are not meaningful, not a real part of the text.
But how far, inside a natural language or not, can we reduce things to a (standardized?) minimal expression?
In digital universes, as long as connection with the physical world is not needed by human beings, this gap vanishes. Representations of beings may have a role in mass reduction for easier use, but remain within the low physicality of electronic devices.
In parallel, human beings have learned to enjoy representations, up to the point of living in purely electronic ("virtual") worlds, where representations stand for themselves, and the maps no longer need a territory to be valuable, though they call for appropriate scale changes for human efficiency or pleasure.
External representations are made by a being to be transmitted to others. In this line, a representation is a kind of message from an emitter to a receiver (possibly multiple).
Internal representations are beings referring to other ones, making it possible to use those in a more economic way. In the classical discourse about computers and information systems, the reality is outside, and the value of representations depends on this world. In DU, a major part of representations stand for themselves and are used for themselves. Art was a precursor of these "second worlds".
A virtual being is not (not necessarily) a representation. It is a being without a location in the material digital universe, even if it mimics real beings. It has a form of reality through:
- its presentation to and interaction with real users, or better, a community of real users
- its persistence in time
- (in the physical world) its presence at a real URL.
Representations as such are passive and dependent beings. They are "read", or "looked at", when that is better than direct access to the represented being, or when the represented being is not accessible: for instance, historical beings that no longer exist, or distant beings whose access is long and costly, etc.
They must not exert an autonomous activity, except in the case of...
Representations are created in two cases :
- when S needs them for its own use ; if it is for future use, the representations are stored in memory and become part of S's knowledge, or "culture" ; in the normal case, these representations are compact and structured, in order to reduce storage costs and to facilitate use ;
- when S has to present things to other beings ; in this case, these representations can be short or, on the contrary, developed to cope with transmission errors or to conform to the receiver's needs or desires.
There are basically two sources :
- the intrinsic nature of the being
- some sort of scanning or artificial compression.
If the represented beings
are changing in time, their representations must conform to these changes.
There are several ways to obtain this result :
- regular access to the being in order to obtain the new information
- knowledge of the laws of change according to time, and computation of the new values when using the representation
- integration of a specific clock in the representation, which will change in a way conforming to the original.
In the last two cases arises the possibility of errors, if the law of change is erroneous or not sufficiently precise, or if unpredicted events affect the status of the represented being.
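The three strategies, and the error the last two can carry, can be sketched in a few lines (a minimal sketch; the linear "law of change", the tick-based clock and all names here are invented for illustration):

```python
# A represented being whose level rises by RATE at each tick.
RATE = 2.0
being = {"level": 10.0}

def polled_value(being):
    """Strategy 1: regular access to the being itself."""
    return being["level"]

class LawRepr:
    """Strategy 2: store a snapshot plus the law of change, recompute on use."""
    def __init__(self, snapshot, rate):
        self.snapshot, self.rate = snapshot, rate
    def value(self, ticks):
        return self.snapshot + self.rate * ticks

class ClockedRepr:
    """Strategy 3: an integrated clock makes the representation evolve itself."""
    def __init__(self, snapshot, rate):
        self.value, self.rate = snapshot, rate
    def tick(self):
        self.value += self.rate

law, clock = LawRepr(being["level"], RATE), ClockedRepr(being["level"], RATE)
for t in range(1, 4):
    being["level"] += RATE
    if t == 2:
        being["level"] += 5.0   # an unpredicted event affecting the being
    clock.tick()

# Polling stays exact; the two computed representations now carry an error.
assert polled_value(being) == 21.0
assert law.value(3) == clock.value == 16.0
```

The unpredicted event at tick 2 is exactly the failure case described above: the law was correct, but the world did not follow it.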
Metrics of representations: e.g. the number of bits in the representation / the number of bits in the represented being.
Nota : see in 1.4. notes about analog and digital representations
General laws on representations ? Grammar, semantics.
2. Numbers, data and the infinite
Typical example of a basically analog device, but requiring the (human) user to digitize: to express a figure (here, in practice, with two meaningful digits). Nevertheless, such a device is sometimes used “analogically”, when the user reads meaning from the movements of the needle (rapid or slow movement, oscillations…).
In the control room: see Louis Richaud, "En salle de contrôle, tableau de bord de la production" (In the control room, the production dashboard), Informatique et gestion no. 64; ergonomics, digital.
User psychology (neurosciences) is fundamental in analog/digital HMI. The Mont Sainte-Odile case.
It does not play the same way with purely machine systems, where the best practice is (more and more) to digitize the data as near as possible to the acquisition point, and to go back to analog only at the effector level.
The infinite :
- the infinite sign
- ... or etc.
- while ( (0<1) == true) do
- recursion, automata
- Turing machines
- the real line R, the set of integer numbers
- the reflexion, mirror test, the feedback loop
- a circle is both finite and infinite
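Several items of this list have a direct, executable counterpart; a small sketch (names invented for illustration):

```python
from itertools import count, islice

# "... or etc.": a generator is a finite text denoting a potentially
# infinite being, produced on demand -- the whole of N in a few bits.
naturals = count(0)
first = list(islice(naturals, 5))

# The always-true while loop, cut short here so the demonstration ends.
seen, n = [], 0
while (0 < 1) == True:
    seen.append(n)
    n += 1
    if n == 5:
        break

assert first == seen == [0, 1, 2, 3, 4]
```

In both forms, the infinite exists only as a rule for going one step further, never as a completed whole.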
Progressive abstraction of numeric signs. The zero.
Errors and approximations. If we are in a navigation school, we can indeed compute the heading. If we are in the real world, we have the following errors (or error brackets):
1) The tide is never (never) at the value given by the tables, due to:
- atmospheric pressure variability
- wind force and direction (up to 2 m difference in Brest)
- in the Seine bay, the tide wave is double; a first one runs in a straight line from (say) Barfleur to Le Havre. It is twinned with an interfering wave that starts from Barfleur towards the south (towards Grandcamp and Port-en-Bessin), then towards Deauville. This causes a well-known perturbation, a "tenue du plein" (prolonged high-water stand) which is really useful at Deauville (a lock port) or, as at Le Havre, where we have two high-water maxima. Result: a high water which lasts longer than in Brest, followed by a fall much more rapid than you would suppose.
2) The draughts generated are variable according to the above variables and to the bottom depth and shape.
3) The ship's lateral drift.
4) The theoretical declination is perturbed by the real declination, which varies according to the magma below. For example, it reaches 8° in the Raz Blanchard, which didn't need that (10 knots of current + stumbling blocks + fog + random waves). The worst is the deviation in this place, in 1955 (I think), with an iron-loaded cargo ship (!) which ran aground in the fog. And there are ancient iron mines below the raz. The EPR is built on an iron mine.
In our case, add the important unknown of the arrival time. Houlgate is accessible only at full sea. But it is even more complex since, depending on the tide coefficient, the draining current out of the small estuary rapidly becomes (1h15 after high water) faster than my yacht's motor speed.
That explains why I never brought my keelboat there. But if you are around, I could try the performance.
So, pragmatically, the best is to go there by intuition, of course grounded on the theoretical case.
My answer: you say that a simple computation, based on an angular difference from Le Havre on the two beacons at the Dives-Houlgate entrance, would suffice,
but that you need "big data" to reach that. Which could probably succeed for an autonomous ship with a good GPS and, OK, sensors to adapt to barely predictable meteorological conditions.
(see Sea Hunter)
But, as you have that neither in your ship nor in your brain, you operate by successive approximations. Which are not intuitive by themselves, but through your personal experience (analog memory, neural network) of navigation in these places.
Numérique, numérisation, the French translations for digital, digitization (and scanning), are misconceptions. The French linguistic authorities have imposed "numérique" to translate the Globish "digital". That is a misconception. Indeed, the binary codes we use to store, process and transmit text, images and information in general are not numbers. They accept neither the structures nor (a fortiori) the operations valid on numbers. They are actually codes, ciphers. And, at the end of the day, the Globish term "digital" is better than any other. All the more (from a French standpoint) because it is built on a Latin root, digitus, finger, which provides a sound etymological basis for the intended meaning.
But analogy also appears at a purely formal level. By itself, the bit order, from the least to the most significant, is a form of analogy. Actually, the internal representation of integers breaks this analogy in order to enhance performance and reliability.
A number is not a digit. It may be represented by a set of digits. But that is not always necessary in order to use it, or even to compute with it, if one stays within the limits of formal calculus. (See Bailly-Longo.)
- N : one integer defines one element
- Z : elements are defined by two integers
- Q : elements are defined by two Z elements, hence four integers
- R : more complex
- C : complex numbers
The representation of a number may be more or less digital
Various representations of 8
huit, eight, ocho, otto,
7+1, 8 + 0,
15 - 5 , 15 - 0
2.4 , 8.1
80/10 , 8/1, (8*n)/n (n not equal to 0)
(8,1) (floating point)
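Most of these written forms can be checked mechanically (a sketch; the names in natural languages, of course, need a dictionary, i.e. a convention, before they can be evaluated):

```python
from fractions import Fraction

n = 5  # any n not equal to 0
forms = [
    8,                   # the bare digit
    0b1000,              # binary positional code
    7 + 1,               # a sum
    80 // 10,            # a quotient
    2 ** 3,              # an exponentiation
    Fraction(8 * n, n),  # (8*n)/n
]

# Many signifiers, one signified number.
assert all(f == 8 for f in forms)
```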
1D vs 2D ?
the main problem is :
- convention, agreement on assembly structures
- the way our brain operates ; or rational operation, more 1D than anything else
- reflexivity (part of convention)
We think of images mainly as bitmaps. But vectorial and other formal representations of beings may be major, even in the expression process. Though, of course, in the digital world everything finally resolves into bits. But let us not confuse the bits of a formal text and the bits of the resulting image.
One can say that some images are bitmaps; others are models. But then, what of vectorial images?
What does a bitmap bring as knowledge?
1. Some global features and parameters, with simple additive processes: average color, average luminance, etc. Variety also may be roughly defined and computed.
2. Distribution of values over the page: upper part lighter, centre more saturated. See [COQ95].
3. General parameters such as resolution, size, histograms.
4. Pattern detection and recognition?
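A minimal sketch of points 1 to 3, on a toy grayscale raster (the 4x3 values and the 4-bin histogram are invented for illustration):

```python
# A tiny grayscale bitmap: 3 rows of 4 luminance values in 0..255.
bitmap = [
    [ 64, 128, 192, 255],
    [ 32,  96, 160, 224],
    [  0,  64, 128, 192],
]
pixels = [p for row in bitmap for p in row]

# 1. global additive features
average_luminance = sum(pixels) / len(pixels)

# 3. general parameters: size and a coarse histogram (4 bins of 64 levels)
width, height = len(bitmap[0]), len(bitmap)
histogram = {}
for p in pixels:
    histogram[p // 64] = histogram.get(p // 64, 0) + 1

# 2. distribution over the page: is the upper part lighter?
upper_lighter = sum(bitmap[0]) > sum(bitmap[-1])

assert (width, height) == (4, 3)
assert histogram == {0: 2, 1: 3, 2: 3, 3: 4}
assert upper_lighter
```

Pattern detection (point 4) needs much more than such additive sums, which is why it stays a question mark above.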
The term model may be taken as :
1. just a synonym of representation ; possibly simplified (comparable to a dummy, a prototype, a…)
2. a “system”, that is, a set of coherent assertions, generally with some quantitative relations ; equations are a particular kind of assertion
3. a “canon”.
Mainly in the second sense, the model is said to explain the being if it allows one not only to describe it at any time, but also to predict the behaviour of the modelled being (possibly imperfectly), with a precision sufficient for the needs of the user. A being is said to be "explained" if there is a model of it sufficient to simulate and predict the evolution of the being in time.
The length of the model is the KC (or at least an estimate of the KC, the Kolmogorov complexity) of the modelled being. Normally, a model is shorter than the object it describes.
Then we can define the yield of a model as the ratio: length of the model / length of the modelled being, the quality of prediction being taken into account.
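Compressed length gives a crude, computable stand-in for the KC, so the yield can be measured directly (a sketch using zlib; the 0.05 threshold is merely illustrative):

```python
import zlib

being = b"0101" * 1000          # a very regular being, 4000 bytes
model = zlib.compress(being)    # a short description that regenerates it

yield_ratio = len(model) / len(being)

assert zlib.decompress(model) == being  # the model "predicts" every bit
assert yield_ratio < 0.05               # regular beings admit short models
```

An irregular (random) being would instead give a ratio close to 1: no model much shorter than the being itself.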
See here the “simplexity” concept, e.g. Delesalle.
In some cases, the performance of a model may be evaluated automatically. It can evolve. There are different cases, depending on the availability of external information on the modelled domain. Or worse, when an enemy attempts to conceal as much as he can.
??? If the model operates from a source, a critical point will be the sequence length, with possibly several nested lengths (e.g. the 8 bits of a byte, and the page separators). But there are other separators than those of fixed-length systems. Find the separators automatically, etc.
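Finding separators automatically can start from a naive heuristic (a sketch; the rule "most frequent non-alphanumeric character" is only a first guess, not a general method):

```python
from collections import Counter

def guess_separator(sample: str) -> str:
    """Guess the field separator as the most frequent
    non-alphanumeric character in the sample."""
    counts = Counter(ch for ch in sample if not ch.isalnum())
    return counts.most_common(1)[0][0]

assert guess_separator("a,b,c\n1,2,3\n4,5,6") == ","
assert guess_separator("x;y;z") == ";"
```

Nested separators (field vs record) would need the same count repeated per level.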
The classical kinds of representations are designed according to the human sensorial system. In DU, this is not pertinent as long as S does not have to interact with humans. Then, core features and behaviours matter more than appearance or spatial shape.
The key issue, when using a representation instead of the represented object, is efficiency... or simple possibility. Beyond the address/type which gives basic access, the representation may be built "from outside", with sampling/scanning/rastering systems, or "from inside", with vectorial representations or other formally linguistic representations. The traditional opposition of geometry to algebra, as explained for example in the Weil/.... debate, is based on different human apprehensions of the world. The algebraic one transposes directly into digital universes through formal languages (indeed, the digital universes were born largely out of the algebraic effort), but the classical geometrical view is not directly digital, and refers to deep "gestalt" or "gestural" features of human beings, and in particular their neural systems. Geometry, from that standpoint, could become more digital if these neural mechanisms are, in the future, sufficiently well described to afford a formal linguistic description.
For a given system S, R is a representation of B if S is able to re-constitute B starting from R.
The simplest modes of
reconstitution are :
- a pointer on (the address of) B itself
- a bit per bit copy of B in S.
Frequently, these two modes are impossible or inefficient :
- by address : B is far located, or does not exist any more, or not yet (B is an object to be made now or later)
- by copy : B is heavy, possibly heavier than the totality of S itself. Moreover, for some actions/processes, what matters lies in certain features of B, which would have to be computed each time.
Then it is appropriate to
replace B by a representation
- shorter (generally much shorter)
- more appropriate to the actions involved.
Let us say first that a complete reconstruction of B is frequently pointless, if even possible. A simplified image will then do better, as long as it is sufficient for the actions to be done.
The construction of representations may then be described, generally, as a division (or analysis), which demands an appropriate analyser (sometimes a "parser"). Symmetrically, the generation of an object similar to B will be a "synthesis", using various kinds of generators.
7. Repetition and concatenation
If B is a long sequence of
concatenated identical strings (possibly the same bit), B may be reduced to
that string times the number of repetitions.
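This reduction is plain run-length encoding; a minimal sketch:

```python
def rle_encode(b: str):
    """Reduce runs of identical symbols to (symbol, count) pairs."""
    runs = []
    for ch in b:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_decode(runs):
    return "".join(ch * n for ch, n in runs)

b = "0" * 1000 + "1" * 5
r = rle_encode(b)
assert r == [["0", 1000], ["1", 5]]   # 1005 symbols reduced to two pairs
assert rle_decode(r) == b             # S can re-constitute B from R
```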
A more general case is the concatenation of words taken in a dictionary, and in general chosen in accordance with a set of syntactic and semantic rules.
- B can be deduced from a representation R (stored in memory, for example), and some rather easy geometrical operation, or filter.
A lot of such operations can be listed and, for a given S (Roxame for example), constitute a generation system.
The full meaning of the work comes from a combination of the elementary meanings of the elements (words, elementary forms), of the structures, and of the way they convey the meaning intended by the builder.
(And don't forget that dictionaries as well as grammars are built with bits.)
Theorem of losses during generation.
8. Self representation
Representation of the self
- Representation of a being “inside” this same being.
That is a major point, as are all the "self", "diagonal" patterns. It points to the "consciousness" issue with its inherent paradoxes. There are several cases:
- representation of the self with its parts (mirror, homunculus), which may physically be done with a mirror and a camera, for instance ; more digitally, with a type and constituents
- representation of the self in its environment, beginning with its own address and the beings around.
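The limit case of self-representation is a quine, a program whose output is exactly its own source text; the classic two-line Python version:

```python
# A being containing a complete representation of itself:
# running this program prints its own source, character for character.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is the diagonal one mentioned above: the text contains a representation of itself (%r) plus the rule for unfolding it.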
9. Autonomy of representations
The variable (in the computing sense) is a form of "meta" : the individual featured by his trace in the information system. And, when he dies, that trace is all that remains of him. The progress of IS is also a progress of each of our traces, indefinitely, with all these files, up to "behavioural databases".
Lamartine says to Nature : "vous que le temps épargne..." (thou whom time spares...), but time spares the verses of Lamartine, which carry to us, and forever (perhaps), the memory of his love, while the waters and rocks of the lake of Chambéry are as dumb and deaf today as in the time of Lamartine.
Once elaborated, representations open the way to a generativity which goes beyond their original scope. That is shown by Michel Serfati about the variables and exponents of Viète, Descartes and Leibniz.
Expansion, compression and the main memory units.
1.5. Sampling and coding
Digitalization of representations has two complementary sides : sampling and coding. For pictures, they become pixelization and vectorization, both of which can be traced far back in art history. We could call the former “material digitalization”, and the latter “formal digitalization”. This opposition is rather parallel to the analog/digital opposition.
Digitalization of images 1. Pixelization. From left to right (extracts) : Byzantine mosaic in Ravenna (6th century), pointillist painting, “Le chahut”, by Seurat (1889-90), and “La fillette électronique”, by Albert Ducrocq with his machine Calliope (around 1950).
Sampling/pixelization: Byzantine mosaics deliberately use a sort of pixel, the tessera, to create a specific stylistic effect. Impressionism, and even more so neo-impressionist pointillism, have scientific roots. Computer bitmaps emerged in the 1970s, with forerunners such as Albert Ducrocq, whose paintings were hand-made from the output of his machine Calliope (a random-generation electronic device using algorithms for binary translation to text or image).
This form of digitalization stays more “analog” than the vectorial one. The “cutting” operates at the level of the representation itself. The conversion requires comparatively few conventions, apart from the raster size and the color model. Within precision limits, any image can be pixelized, and any sound can be sampled.
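Sound sampling follows the same scheme as pixelization; a sketch of one sine period on a deliberately coarse raster (8 samples, 16 levels — both values arbitrary):

```python
import math

RATE = 8      # samples per period: the "raster size"
LEVELS = 16   # quantization levels: the "color model" of sound

raw = [math.sin(2 * math.pi * i / RATE) for i in range(RATE)]
codes = [round((v + 1) / 2 * (LEVELS - 1)) for v in raw]

# Only discrete points at finite precision survive the conversion.
assert len(codes) == RATE
assert all(0 <= c < LEVELS for c in codes)
assert codes[2] == LEVELS - 1 and codes[6] == 0   # the peak and the trough
```

The few conventions mentioned above appear as the two constants: everything else is mechanical.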
Digitalization of images. 2. Vectorization. From left to right: Guitar player, by Picasso, 1910; Klee tutorial, 1921; Schillinger graphomaton, around 1935.
Coding/vectorization is a more indirect digitalization, since it creates images from elementary forms, as well as from texts, grammars, and that particular kind of text that is an algorithm. Here also, the roots go deep into art history. An important impetus was given by cubism. The Bauhaus tried hard along this way; see for instance Itten for color and Klee for patterns. A first explicit vision of automatic vector image generation was given by Schillinger [Schillinger]. Incidentally, Schillinger was mainly a music composer and composition teacher; and of course, digitization and music went along similar ways.
In music, the main coding system today is MIDI.
The cutting operates not directly on the representation, but on language objects. The conversion requires more or less detailed conventions. It may be hierarchized, and finally leads to language. Any analysis (and, symmetrically, any synthesis) may be considered as a coding (and possibly a decoding).
Not every representation (picture or music) may be appropriately coded, or the coding may imply considerable loss, especially if the code is used to generate a “copy” of the original.
The two ways of evolution merged into binarisation in the 1950s-60s, as stated in the seminal paper of von Neumann et al.: “We feel strongly in favor of the binary system”, for three reasons :
- hardware implementation (accuracy, costs),
- “the greater simplicity and speed with which the elementary operations can be performed” (arithmetic part),
- “logic, being a yes-no system, is fundamentally binary, therefore a binary arrangement… contributes very significantly towards producing a more homogeneous machine, which can be better integrated and is more efficient”.
What about "analysability" or "divisibility" ?
- in DU, any being is fully divisible into bits (and, more often than not, for practical purposes, into bytes)
- sometimes a being may be fully divided by another being, and then considered as their "product". Examples
. a text (in a normal language, natural or not), is the product of the dictionary times a text scheme
. a written text is the product of a text scheme, a language, and graphic features: font etc.
. a bitmap with its layers (R,G,B, matte...).
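The first example can be made literal (a sketch; the sentence and the variable names are invented):

```python
text = "the map is not the territory the map".split()

dictionary = sorted(set(text))                 # the factor shared by many texts
scheme = [dictionary.index(w) for w in text]   # the factor specific to this one

# the "product" dictionary x scheme re-constitutes the text
assert [dictionary[i] for i in scheme] == text
assert dictionary == ["is", "map", "not", "territory", "the"]
```

The same division underlies every dictionary-based compressor.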
The concrete/abstract distinction stems from the basic assertion scheme : there is that.
The concrete object is : what is there.
The abstract object is : what is that.
“That” is a term, or a concept. It has (somewhere in DU) a definition or a constructor.
The extension of “that” is the set of concrete objects corresponding to that definition (or so built). Or possibly the cardinal of this set.
The comprehension of “that” is the definition. It is also a DU object, then may be seen as a string of a definite number of bits.
Thesis : the definition or constructor does not, in general, describe or set the totality of the object, every one of its bits. The definition may be just a general type, many bits of which will be defined by other factors. An object may be recognized as pertaining to a type, since it has the corresponding features.
Among other consequences, that leaves space for actions on this object by other actors, or by itself.
If a concrete object is strictly conformant to its construction, it may be said to be “abstract” or “typical”, somehow, or “reduced”.
“Reductionism” is the thesis that the whole of DU may be obtained
from its definitions.
The ratio of “type defined” bits over the total number of bits of the object could be taken as a measurement of its “concreteness”.
Quotations : "An abstract system is made of concepts, totally defined by the hypotheses and axioms of its creators" (Le Moigne 1974, after Ackoff). "Abstract beings have a limited and definite number of psychemes." (Lussato, unpublished, 1972)
Thesis : DU itself may be defined, but of course very partially. That may be controversial. The address of DU is somehow the “zero” address.
Thesis: when comprehension grows, extension decreases, since fewer and fewer objects correspond to the definition. There comes a moment when only one object corresponds to the definition. For instance, “world champion of chess” has only one object at a given time. Or there can exist no object answering the definition, even if it bears no contradiction.
Thesis : If there is only one object, if the extension is 1, then one can say that the abstract and the concrete object are the same. (to be checked).
There is a kind of dialectic :
- when the abstract grows, the concrete emerges out of it
- when concrete knowledge (measure) grows, the abstract must become more finely cut, then more massive.
An example of concrete beings emerging from abstract descriptions : the "segment of one" (from the "one to one" marketing language), "the one who...". Evolution of the quantity of information about a consumer: at the start, the customer does not even exist in the information system, which records only anonymous operations (or operations difficult to group, as with insurance contracts). Then the information grows, covering not only the individual customer but also his environment.
See identity : what makes an object unique.
The absolute concrete may be taken as a sort of limit : what is accessible to all beings in DU (and possibly without conventions ?)
Thesis : codes without redundancy entail hierarchy loss. And a wrong bit spoils the whole system, if there are no longer stronger and weaker beings.
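A redundancy-free prefix code illustrates the thesis: there are no longer "stronger and weaker" positions, and one wrong bit desynchronizes the whole decoder (a minimal sketch with an invented three-symbol code):

```python
CODE = {"a": "0", "b": "10", "c": "11"}       # no redundant bits
DECODE = {v: k for k, v in CODE.items()}

def encode(msg):
    return "".join(CODE[ch] for ch in msg)

def decode(bits):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)

bits = encode("abcabcabc")
damaged = "1" + bits[1:]                      # one single flipped bit

assert decode(bits) == "abcabcabc"
assert decode(damaged) == "cacabcabc"         # damage spreads past the flip
```

In a fixed-length (redundant) code, the same flip would alter exactly one symbol; here it rewrites the first four before the decoder happens to resynchronize.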
Note : a lot of developments here can be found in the literature about Object Programming. See in particular [Meyer]