
Ops

This is a proprietary (and exploratory) concept of Pierre Berger, elaborated to measure the "power" of digital beings. Ops has a bit·bit/time (bits²/time) dimension, and applies to processors, memories and communication lines.

1. Definitions

We propose a unit called "ops", dimensionally coherent with bits²/time (time in seconds, or an ad hoc unit). We adopt the following definitions.

. For memories, the number of ops is equal to the capacity times the transfer rate. Example: a 650-megabyte CD-ROM, equivalent to about 6 billion bits, is read in full in ten minutes, that is 600 seconds, giving a transfer rate of 10 megabits/s; global power 60 Gops.

. For processors, we take as numerator the square of the reference word length (32 bits, for instance), times the number of instructions per second. Though rather artificial, this measure gives a correct weight to the word length, which impacts instruction variety as well as operand length. Example: a 32-bit, 400 MHz processor: 32 × 32 = 1024; global power 1024 × 400 MHz ≈ 400 Gops.

(Image: core.jpg)

. For communication lines, we multiply the bandpass (in bits per second) by the length of the line (expressed in bits, since the propagation speed is practically constant and equal to the speed of light). For a given bandpass, the spatial distance between successive bits is constant. Here, the bandpass thus plays a squared role. Example: a phone line of 100 km, used at 54 kbits/second. At this speed, around 60 kilobits are distributed over the 300,000 km that the signal covers in a second, hence roughly one bit every five kilometres, or 20 bits over the 100 km. The global power is then about 60,000 × 20 = 1.2 megops.
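
A minimal Python sketch of these three computations (the function names and unit conventions are illustrative assumptions, and the raw products are expressed here in bits²/second, before any ad hoc rescaling of units):

def ops_memory(capacity_bits, transfer_rate_bps):
    # Memory: capacity times transfer rate.
    return capacity_bits * transfer_rate_bps

def ops_processor(word_length_bits, instructions_per_sec):
    # Processor: square of the reference word length times instruction rate.
    return word_length_bits ** 2 * instructions_per_sec

def ops_line(bandwidth_bps, length_km, light_speed_km_s=300_000):
    # Line: bandwidth times the number of bits "in flight" on the line.
    bits_in_flight = bandwidth_bps * length_km / light_speed_km_s
    return bandwidth_bps * bits_in_flight

print(ops_memory(6e9, 10e6))       # CD-ROM example, raw bits^2/s
print(ops_processor(32, 400e6))    # about 400 Gops (409.6e9)
print(ops_line(60_000, 100))       # phone line example: 60000 * 20 = 1.2 megops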

In this way, we can hopefully add the power of different components. But sensible computations, in a given system, must include limiting combination factors.

We can understand the ops measure as an organic complexity (here, hardware is seen as organs for the software and data), or as a functional complexity (describing the function of systems whose organic complexity still has to be assessed, for instance by production costs, CAD programs, etc.).

2. Gate/truth table metrics

An example of model. We build a truth table with, on each row, first the values of I and E, then the corresponding values of E and O. The length of each row is e + i + o.

There are 2^(e+i) rows. The total volume TV of the table is the product of the two. We shall take for ops simply (e+i)·(e+i+o).

If several gates are combined, NP being their number, we shall study the ratio TV/NP, which may be taken as an approximation of the yield of each mounting.
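
A small sketch of these two quantities, assuming the TV definition just given (the function names and the AND-gate example are my own illustration):

def table_volume(i, e, o):
    rows = 2 ** (e + i)          # one row per input/state combination
    row_length = e + i + o       # bits per row
    return rows * row_length

def mounting_yield(tv, np_gates):
    # TV/NP, taken above as an approximation of the mounting's yield.
    return tv / np_gates

# Example: an AND gate considered alone (i = 2 inputs, no state, o = 1 output).
tv_and = table_volume(2, 0, 1)   # 4 rows of 3 bits = 12
print(tv_and, mounting_yield(tv_and, 1))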

A single NOT gate. There is no state value. Then TV = NP = TV/NP = 1.

(We could add a "yes" gate.)


In the "gate model", RD is the number of gates over which a clock is needed.

How to reconcile that with (one-dimensional) DU? One elementary gate = 3 bits? But yes, we consider it is a being, the function of which is defined by its type. We factor out the issue of how it works, as for COM.

If the model includes memories, say a memory of m bits, then TV = 4m + 3.

To come back to ops, we must introduce time. That fits rather well with the ops counts, since a truth table is a square.

3. Gate metrics

Gate/ops. In the simplest case, one bit, one ops.


For one bit per cycle, we still need many more gates than bits? How many? (We are talking about memory.) If the transfer rate is one bit per cycle, then ops = capacity.

For the writes, we take the addressing/decoding, then collect the results. We can organize the memory in pages (a simpler addressing mode). There are also the autonomous processor emitters, in some way.


The gate metric reduces rather easily to ops, with an elementary gate having one cycle per time unit, representing indeed one ops. Then we have to find the 2 exponent somewhere.

4. Combination of systems and power computation

Combining systems and relating their L and ops is not easy. Differences and complementarity of devices, and parallel versus series architectures, will all play a role.

The computation seems easy for serial mountings, if the series is unique (a series of identical gates, or negations). Easy also if purely in parallel. Beyond that, it rapidly gets much more complex!

If the processor has a large memory, it can rotate over itself (my God, what does that mean?). That relates to the autonomy problem (the longest non-repetitive cycle).
The input rate (e.g. from the keyboard) conditions the passing over this cycle, with random input bits, i.e. bits not predictable by the processor.

One more difficulty: the basic randomness of DR, negligible in general, but...
Ops of the printer versus ops of the keyboard.

Printer (PR): instruction length × frequency.
The keyboard (KB) sends only 8 bits, at a much lower frequency.

Since ops(KB) << ops(PR), we can say that KB has little influence... and yet it is enormous.

Under Windows, at the start, all the work done takes several minutes, which is not negligible in terms of KB ops;
but after that, PR no longer works and waits for KB.
PR becomes totally dependent.

But we could have loaded a program which did anything (an animation, a screen saver...);
the length of non-repetitive loops should be computable.

Other ideas to compute combinations

1st idea: emission of a fixed string (for instance E$),
concatenated with a bit extracted from the E$ string;
the non-repetitive length is then (l(E$)+1)·l(E$), of order l(E$)².

2nd idea: start again, placing one bit of E$ inside itself,
successively in every position;
we then have l(E$)² × l(E$), that is l(E$)³.

3rd idea: start again (as in 2), but with substrings of length from 1 up to l(E$). We have l(E$)⁴.

4th idea: emit all the permutations of E$.
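
A sketch of the 1st idea, as I read it (the function name and the sample string are illustrative): the fixed string E$ is emitted repeatedly, each time followed by one bit extracted from E$, taking each position in turn, so the emitted length is l(E$)·(l(E$)+1), of order l(E$)².

def emit_idea_1(e_string):
    out = []
    for k in range(len(e_string)):
        out.append(e_string)         # the fixed string
        out.append(e_string[k])      # one bit extracted from it
    return "".join(out)

e = "0110"
emitted = emit_idea_1(e)
print(len(emitted), len(e) * (len(e) + 1))   # both 20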

For all these ideas, we must take into account the fact that E$ may contain repetitions.
Indeed, starting from the 1st idea above, we admit only two different strings... and we are taken back to Kolmogorov.

K(E$) (the smallest reduction) and Ext(E$), the largest extension without external input. In fact, this length will depend on the generation program.

We could have L(Ext) = f(L(E$), L(PR)), with the limit case L(E$) = 0. Then we would have a "pure" generator.

(L(PR) = program length)


(something like the vocabulary problem)

if E is fixed and l(E) = l(PR) + l(E$)


with an optimal distribution. But L(PR) is meaningful only if we know the processor power.
We need a third element: L(comp), L(PR), L(E).

Processor-keyboard combination

The global ops computation must include "dependency factors". The result will lie somewhere between:
- the smallest of the ops,
- the product of the ops,
- the sum of the ops.

If the constraint factors are weak, the sum of the ops will mean little if one of the components is much more powerful than the other one: (a+b)/a = 1 + b/a.
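
A small illustration of these bounds (the figures and the function name are arbitrary assumptions, not measurements): when one component dominates, the sum carries little information beyond the larger term.

def ops_bounds(a, b):
    # Candidate combined powers: the smallest, the sum and the product.
    return min(a, b), a + b, a * b

a, b = 400e9, 1e3           # e.g. processor ops versus keyboard ops
print((a + b) / a)          # = 1 + b/a, barely above 1 when b << a
print(ops_bounds(a, b))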

Screen/processor combination

Here it is much simpler. The smaller of the two is the most important, with the constraint embodied in the graphics card.

Memory/processor combination

PR-Mem (exchanges)

1. There is no external alea (randomness); we deal with the whole.
2. Mem is not completely random for PR, since PR writes in it.

That depends rather strongly on the respective sizes:

if Mem >> PR, it looks rather like a random source;
if Mem << PR, there is no meaningful randomness.

In between, strategies, patterns and various tracks can be developed. We would need a reference "PR", then work recursively.

There is some indifference in TV/NP. The relation may be very good if we choose the right structures. Then Grosch's and Metcalfe's laws will apply (and we then have a proof and a more precise measure).

Conversely, if we work at random, the relation degrades. There is a natural trend toward degeneration (DR, or Carnot's laws).

Conversions:
- we can define the gate system necessary to get as output a defined string,
- we can define a gate system as a connection system inside a matrix (known, but of undefined dimension) of typical gates
- we can define a system of gates or communication lines through a text (analogous to a bit memory bank).

Then we can look for minimal conversion costs, a way of looking for the KC. The good idea would be to look for a KC production per time unit... not easy...

5. L and ops

Given
- the cycle duration of the being
- the size of I and O
- the necessary power to compile the E and O functions,
... see how that relates to L for a combination of systems.

Note : ops is purely H, but if we add some kind of limitation on energy for gates, that could be a way to take P into account.

6. Growth

6.1. A model of a self-perfecting being

In a very simplistic model, S1 aims to get the maximum out of its I. At the start, due to limited processing power, it reads only a part of it. Then, as it detects regularities in this part, it can limit its reading there and extend its reading zone.

(Image: Silence.jpg)

If the inputs are perfect white noise, or perfectly random, or, more exactly, if S1 does not succeed in finding any useful regularity, it will be impossible to do better. Irreducibility is a feature of randomness. But we can imagine cases where this kind of "learning" is effective. (We assume that the input flow is larger than what S can compute.)

S supposes that there are constant bits in I. It examines each bit separately. If one bit stays the same during four cycles, it admits that this value is constant, and begins to scan another bit. If there are changes, it either continues to read it, or considers this bit as uninteresting and drops it. (A minimal sketch of this mechanism follows the list below.)

S proceeds this way until exhaustion of:
- its storage capacity for comparing successive values of bits,
- the width of the I window.
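
A minimal sketch of this mechanism (the function names, the "patience" limit standing for the comparison storage, and the toy input are my own assumptions): S watches one input bit at a time, declares it constant after four identical readings, and drops it as uninteresting if it keeps changing.

def scan_constant_bits(read_cycle, window_width, stable_cycles=4, patience=8):
    # read_cycle() returns the current I vector as a list of 0/1 bits.
    constants, position, history = {}, 0, []
    while position < window_width:
        history.append(read_cycle()[position])
        if len(history) >= stable_cycles and len(set(history[-stable_cycles:])) == 1:
            constants[position] = history[-1]       # admit this value as constant
            position, history = position + 1, []
        elif len(history) >= patience:              # still changing: drop as uninteresting
            position, history = position + 1, []
    return constants

t = [0]
def cycle():
    t[0] += 1
    return [1, t[0] % 2, 0]    # bit 1 alternates; bits 0 and 2 are constant

print(scan_constant_bits(cycle, window_width=3))   # {0: 1, 2: 0}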

It is easy to design more and more sophisticated mechanisms of this kind, with pattern recognition, learning in the proper sense, the building of laws, etc.

6.2. The (n+1)th bit

The exciting moment is when the model grows on a significant scale, in other words when, in some way, it questions itself (rather metaphorically). A digital view of Hegelian dialectics, the "steam engine of History".

A logical foundation: the only meaningful growth (or growth of meaning) goes by adding new bits. By design, they are not determined: if they could be determined from known bits, they wouldn't matter. Then we have the daily play of interaction with the world and other humans.

That new bit, and more generally the external beings, must remain in existence to have a meaningful value. So S must also respect the physical matter which bears them, and the energetic conditions of their existence.

But S must also want something of its value. And its place inside its E (low end, high end, place in the model). This bit here, and generally this being in those circumstances, in a specifically defined space.

Choose the right obstacle to jump over, for my horse, for me, for my children or students. Not forgetting the limits of the possible, but
- taking a well-calculated risk,
- with a good return.

The (n+1)th bit is a negation of all the first n bits. It is the antithesis. But as such it is just the preceding string plus one bit, passing from 0 to 1 (one could take the inverse solution). Or, more exactly, the negation is the negation of ALL the existing strings, adding one new bit?

But the pure negation, taken as such, is of little interest. Integration into an expression is the real task. Then everything must be sufficiently cast.

A bit negates its environment, and at the same time settles it ("me pose", if I am this environment).

A model with recurrence: in a being of nn bits, how many different beings can we place? A threshold is passed at n·2^n. Then one can carry on a little, up to the moment when we must use n+1 bits.

This model supposes that S aims to optimize its performance, in this case the filling.

This threshold may be described as "where the model size must be increased". But only after some offset, a sort of supercooling (surfusion), with an intermediate region of indifference, or gap.

Theories, in general, pay little attention to these zones. They are generally skipped over, by appealing to large numbers, scale arguments, and reasoning "up to a constant factor".

Between n·2^n and (n+1)·2^(n+1), there is stability.


n     2^n      n·2^n     (n+1)(1+2^n)   Diff    Diff/n
1     2        2         6              4       4
2     4        8         15             7       3
3     8        24        36             12      4
4     16       64        85             21      5
5     32       160       198            38      7
6     64       384       455            71      11
7     128      896       1032           136     19
8     256      2048      2313           265     33
9     512      4608      5130           522     58
10    1024     10240     11275          1035    103
...
20    ~1M      ~21M      ~22M           ~1M     ~52K
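
The table can be recomputed from the formulas given in its column headers; a short sketch (the variable names are mine, and Diff/n is floored as in the rows above):

for n in list(range(1, 11)) + [20]:
    capacity = n * 2 ** n                # bits used at the threshold
    next_col = (n + 1) * (1 + 2 ** n)    # fourth column of the table
    diff = next_col - capacity
    print(n, 2 ** n, capacity, next_col, diff, diff // n)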


Then, when n becomes large:
- a small increase in n is sufficient to double (or even square) the number of different beings for the same increase in capacity,
- the region where it is profitable to stay with n bits is narrow (and of little interest).

Hence (product laws):
- beings (blobs) cannot be "played" with,
- replacing beings with shorter ones gives slack, but uses the space less efficiently.

There is something of the sort in Mendeleev's periodic table: at the start, the combinations are poor; the further we go down the table, the longer the rows.

(Image: nplusonebit.jpg)

When the code nearly reaches saturation, there remains no flexibility (to add some new beings), nor security by redundancy (note the proximity of these two concepts). Then it may be profitable to jump to n+1 bits even before reaching the threshold. (There is a lot to say about this "slack", for instance about hard disk use.)

That, of course, depends on the processor costs.

Let us suppose that S has to represent 1000 beings. That means 10 bits, with a small loss (24/1000, or 2.4%). If S splits this into two features, for example one of 100 values and one of ten values, it needs 7 + 4 bits, that is 11 bits, with a global capacity of 2048 beings. The separate slacks are 28% and 60%, the combined one 105%. But if the features are smartly chosen, it may be profitable anyway, if at least one of the features is meaningful for a sort or for applying some pricing mode.
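
The slack arithmetic above can be checked with a few lines (the helper names are illustrative):

from math import ceil, log2

def bits_needed(count):
    return ceil(log2(count))

def slack(count):
    # Unused fraction of the code space opened by bits_needed(count) bits.
    return (2 ** bits_needed(count) - count) / count

print(bits_needed(1000), slack(1000))   # 10 bits, 2.4 %
print(bits_needed(100), slack(100))     # 7 bits, 28 %
print(bits_needed(10), slack(10))       # 4 bits, 60 %
print(2 ** (7 + 4) / 1000 - 1)          # combined slack, about 105 %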

Another case: a memory of 10K bits can host 1000 beings of 10 bits. If S factorizes with a first feature of 10 values, needing 4 bits, there remain (10K - 4x) bits for x objects. Then, when x grows, the 4-bit take is of low importance (to be checked).

In the case where S is not sure of the available capacity nn, the passage to a greater number of bits would be an acceptance of risk.

An interesting problem (this part should be developed in "structures") is what we do after a "Hegelian" new bit. How will this global negation of the entire S (antithesis to the S thesis) be worked on afterwards?
- adding a new divisor to the being: a separate feature, when that bit is at the new value (let's say 1, as opposed to the normal, implicit 0's); and possibly the construction of a totally new structure for the beings beginning so, or partly so in E... that's radical;
- transferring it to one of the present divisors, or a sub-part of S, as a strong bit, changing the meaning of this divisor or feature; it may be radical, but limited to that sub-part;
- putting it on the high part of a number;
- putting it on the low part of a number (we have just made a refinement on some quantitative parameter).

6.3. The next bit, where to place it ?

That supposes that there is a choice, a gap, an opening "after" the first bit, an ordinal at least, two positions related to each other. It is "engagement". If a being is launched, thereafter the being must be eaten, S must stay on this sensor.

Every time I add a bit, it is either structural or precisional:
- a structural bit opens a new field, e.g. from fixed to variable length for a variable;
- a precisional bit develops or extends an existing field: more precision on a quantity, a new bit in a raster.

The role of the new bit is defined in a processor (hardware, at the limit) or in some part of a program (e.g. a header).

The "internal" word, 9x8 bits, 72 bits (per character). To draw it, it would need 9 bits and a pointing mode of the one which is inside the rectangle of nine.
For extension, 2 bits, which determine the 3rd.
To compute a curvature radius, 3 pixels (only one circle passes through three points).
It will perhaps not give a possible circle, that is with a centre on a pixel, a definable radius with the number of bits allowed by the software and passing actually through these three points
More precisely : a circle drawing algorithm could find at least one circle, all the circles, if one circle exists... including these pixels. It is a very different problem from the Euclidean one
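
As a sketch of the Euclidean side of the question (the function name and the sample pixels are my own): the unique circle through three non-collinear pixels can be computed with the standard circumcentre formula, and one can then check whether its centre itself falls on a pixel (integer coordinates).

def circle_through(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("the three pixels are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), r

centre, radius = circle_through((0, 0), (4, 0), (0, 2))
print(centre, radius)                                        # (2.0, 1.0), sqrt(5)
print(centre[0].is_integer() and centre[1].is_integer())     # centre lands on a pixel here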

7. Varia to be edited

Ops and noise
The proportion of wrong bits on any operator.

Is it possible to compute a circle whose centre is not a pixel? Yes.