Manfred Wolff-Plottegg txt for Olafur Eliasson catalogue
|Fake||Quick and Dirty / Fake Architecture||Forward Planning||Neuronal Architecture Processor||PS. for Olafur|
The fake theory is based on the recent observation that a fake is a media-related phenomenon, which today, in the world of computers and of the www, reaches its ultimate climax.
The nature of the new media makes a fake different from forgeries, deceptions, lies, April Fools' jokes etc., which traditionally want something, transmit something. A fake is not conceived in terms of content; it therefore isn't, does not have, but offers anyhow.
It is an obvious and generally accepted phenomenon that "persons" in a chatroom on the Web develop several and varied identities. These IDs can be changed rapidly and at whim; sometimes they are partial, extremely exaggerated, or the like. The classic concepts of personal identity and integrity are abolished. From now on, split and multiple personalities are possible and, at least on the www, we are now allowed what up to now was frowned upon "in real life" (IRL), what was forbidden socially_scientifically_artistically. This development and differentiation is an indication of the characteristics of a fake.
It follows that the clarity and lack of ambiguity required up to now are being replaced with complexity; that the historical cultural unity of person_identity_integrity is being erased 1.); that the reliability of an opinion, of individuality, of a viewpoint, of a character, of handwriting is being relativised; that all rules such as "one man, one word", "what I say in no uncertain terms" and "I stand by my word" now end up in the trashcan. "Read my lips!"
The underlying rule of rules on identity and coherence was the mandatory unity of space_time_place (Greek drama). As if we were trapped in a fold of time, this unity is binding even today: the alibi, the elsewhere, the proof that the accused was in a different place than the scene of the crime at the time of the act and therefore cannot be the perpetrator. This becomes obsolete in the age of dislocation and telemanipulation.
The theories of morphogenesis and behaviour patterns are of course subject to the same change: the theories of representation, the Pygmalion syndrome, the transposition of expression, the total work of art, the world improvements, the derivative procedures, static determinism etc. etc. Of course, in architecture, the old Vitruvian rule of the unity of function_form_construction corresponds to this.
In fact there were nevertheless always, in the arts and sciences, attempts to escape these commitments: to escape the prison of space, the limits/transitoriness of time, the obligation that something must be just so and not any other way. The developments in the sciences, and especially the dynamics of the 20th century, explicitly require the removal of old rules and parameters. The rule of the "body" is erased by virtuality; the stylistics of the 19th century and the functionalism of the 20th century are being replaced by system controls, that is to say, away from the product (hardware) and towards the metalanguage of control (software): the developments "from expressionism to actionism, from constructivism to deconstructivism, from determinism to relativity" 2.) were brought about by various disciplines. Formalisation (methodology) replaces the content approach, the sense of mission. The trail runs from pataphysics (via surrealism, fractal theory, fuzzy logic, chaos, complexity and "real virtuality") to the fake. The paradigm shift away from the monopoly of unity was brought about quite decisively by the new media. The differentiation of time_place_action, the simultaneity of presence_absence_dislocation, the coincidence of subject_object as system variables are, at the beginning of the 21st century, no longer the delusions they once were. Virtuality is real 3.). We have emancipated ourselves: shift, beam, morph, zoom, fake!
So just as the potential of "real virtuality" throws a new light on "IRL", the knowledge and production of fakes allow us to see what was until now perceived as the "true" non-fake world in a new way. With fake it is not a case of manipulation of reality (as with deceptions etc.) but rather the production of new combinations of reality. Primarily, the "content" is erased or shifted. A major step in this development lies in the detachment of information from body_object_place as information carrier. The flow of data in computers / the www dissolves identities (autochthony, genius loci, ethno, etc.) and so provides a further basic principle of the fake: placelessness - if something is somewhere where it should not be, it has the potential to be a fake. And if information as a whole floats freely, a fake is not far away. Capricorn One - ... and no man on the moon?
This prototype of shifting can of course also be found in the dimensions of time and speed. Dislocation corresponds to Quick&Dirty: it is not the "rightness_truth" of information that counts, but the speed (see the tabloid press or fuzzy logic). The subsequent outcome becomes the primary event: there is no original; even the attitude / intent does not exist.
Quick and Dirty / Fake Architecture:
The world of new media, especially that of the computer and the www, has always been virtuality. The tradition of architecture has always been the place and the identity (identity of place, defining/shaping a place/identity).
Even in the "real world" of architecture, shifts have been experimented with for a long time now, e.g. just after the invention of perspective (anamorphoses, anagrams, Borromini / Palazzo Spada, illusionist architecture/painting, etc.). As early as in ancient times the appropriateness of material 4.) was in doubt, but we grumble about this less than about PVC imitations of wood. The rules of architecture such as readability (e.g. of construction) were also interpreted differently in Gothic times (Trento Cathedral); the Potemkin City was a forerunner of postmodernism, but certainly not of "form follows function". "Coherence and truthfulness" are yet again erased: the fake design, not as pretence of reality but as invention, does not become "true"; it becomes a new reality.
Finally, when generative architecture developed in the late 20th century, the previously cultivated anthropocentric approach was replaced by external automated events. The computer allowed architecture to get rid of the former intention to represent world-views, and the most essential characteristics of an up-to-date and modern architecture were formulated. Neither the architect's handwriting trademark as an individual signature, nor the purpose as fulfilment of an aim, nor predefined functions as metaphors for teleologies exist any more. The long-lasting predominance of form, body and material has been overcome; architecture is no longer a signifier; divorced from retrospection, relieved from habits, personal desires and projections, it no longer pays tribute to the powerful. Being no longer derivative, architecture accelerates, becomes autocatalytic.
Within the environment of the CPU, the linear, singular and visual logic of orthogonal projection and perspective is obsolete. Binary lines are strokes devoid of any content, the namelessness of lines leaving them without any functional purpose, whereas the CPU supplies the vocabulary of forms. The binary house is complete in the CPU: without location, in an environment without point of view, a site without standpoint. For the eye, the presentation is immediate, all opened images are equivalent, the CPU condensing the duration of time into simultaneity. The binary house itself has no dimension; it is simultaneous & timeless, non-hereditary, abstract, dematerialized architecture. The plan is dead, perspective is dead. There's no more point of view. The flow in the Net defines architecture as instantaneous and placeless.
By browsing in the Net, architecture is further accelerated: as the Net is characterised by rapid access to an enormous amount of data stored side by side - non-hierarchic, quasi equivalent - architecture manifests itself by browsing or surfing. In conventional thought, each design step was a stage of development, an improvement on the former step; browsing, because of the huge amount of equivalently coexisting data, now relativizes the illusion of "quality", the idea of truth and thus also the fiction of improvement. The modern architect is no longer an idealistic do-gooder.
The aspect of rightness is abolished, and the architect's know-all dogmatism no longer has any ground. Subsequently, neither appropriateness of material nor of craftsmanship nor of construction exists any longer. As a criterion, appropriateness is now replaced by speed, and as a result of speed, precision is replaced by superficiality. For this ever faster speed and superficiality, the theory of fuzzy logic has been developed. This means that there should be more generosity. A good architect is a superficial surfer.
Static models of architectural production have run out. Instead, intelligent, interactive surfing in system processes has become important. The formulations of architecture algorithms in the Net will not be aiming at a better world or any other content or ideology. This architecture without teleology will be autocatalytic, algorithmic and emancipated: quick & dirty.
Speed allows neither time for fundamental analysis nor for checking for correctness (which becomes fiction anyway), nor for questioning the sense, the teleology. As preliminary assumptions, some parameters arise: a fake is deliberately against what is generally accepted, does not need an analysis, does not wait for changes by evolution. Speed makes a fake climax, but it is short-lived, creative and quick - has esprit, but no identity or authenticity.
And this speed guarantees dynamic developments. Planning has always been determinism, designing something today to be built tomorrow and to last as long as possible (today argued with the term "sustainability"). Now we have the tools to escape determinism.
The aim and the product of any architectural transformation has always been the adding of an object, of a form, of aesthetics to the world: something new to the old, something artificial to nature, something domestic to the landscape - so that the world afterwards becomes different from before, "better" according to the corresponding world-improvement theory, which presidents speak about at inaugurations. The interesting thing today is that objects, forms and conditions are no longer transformed, but that the "principles of design" are themselves transformed. From object planning (1st order) to the metalevel of system planning (nth order): from product to tools, to machine tool, to know-how. The successful architect of the 21st century will simulate and control processes not as world improvement but rather out of the pure joy of designing quick & dirty fakes.
warning: a long slow fake becomes "true" and must immediately be re-faked!
The method of forward planning never looks back, it accelerates architecture. Because architecture is a slow medium, which did not accept the major scientific changes in the 20th century, it is even more urgent that new browsing systems become part of our design methods. To accelerate architecture, design must be dynamised, the design process simulates/stimulates real-life procedures.
The forward-planning method is not (as is usual) linear, from the first sketch to the "finished project", from one scale to the next bigger scale - where the finished project differs, if at all, from the beginning only in terms of details. The forward-planning process modifies permanently - not in the sense of correcting errors or of improving the world, but in the sense of permanent changes of surrounding variables, of growth or of changing functions. As soon as something already known appears, it must be changed.
Seen from the point of view of the theory of architecture, the rigid images (object, proportion, aesthetics) of traditional architecture are abandoned and shifted towards the control and steering of (urban, architectonic, building) processes. Building science is no longer understood as object-related (function, structure, typology), but as a mode of procedure, as a design method. The function of "the function in architecture" is no longer the object / form, but rather the process. The object-like built form is not the aim, but might be the result. Buildings are changing (ref. Stewart Brand, "How Buildings Learn") and the history of architecture confirms: after the traditional building_body_architecture (volume, mass) and subsequently structural architecture, we are nowadays designing information-architecture. Built architecture becomes the flesh for the wireframe model of reality. The theory of mapping_merging_morphing_faking different electronic, physical and social spaces together as a new "site" of architecture is the basis for the concept of virtual&real-life space. Thus - after the rigid, deterministic design of definite conditions, and after the concepts for the design of developments - the transition to an architecture of process-design is formulated, and with it the transition towards architecture-algorithms, towards autocatalytic and genetic architecture takes place.
These dynamic events are simulated by the methodic procedure. Instead of the usual settings (program of functions, location, site etc.) algorithms for design are provided - and naturally are to be modified while the work is developing.
A straightforward design project demonstrates that it is independent of the site/place; the location may be changed at any time / rapidly. It is free from functional interdependences, i.e. a project can be connected, overlaid, moved, intersected, merged, displaced, etc. (ref. Game of Life) with others, thus modifying the preliminary design.
After this merge/morph, the project is treated with the factor of "growth". What is the effect if particular components of the design are growing? What developing potential lies in the design? Can it be treated with the process of growth? When does it lose its shape and become something new? Thus the center of interest is not a detached singularity (auratic object, charismatic demiurge), but a dynamic interchange.
Neuronal Architecture Processor 5.)
In a setup for an installation we experimented with and demonstrated how to get rid of one's own creativity: how to generate architecture directly out of neuronal spikes!
Processor 1 simulates 3 biological neurons. The "spike train" output is transmitted to Processor 2. A script, simulating a "DNA transcription", transforms the incoming spike trains into data sets for 3-D solids: by AutoLISP, the "genes" of the spike train ("DNA") are transformed into AutoCAD commands. Thus the incoming spike trains permanently generate new solids.
This installation is an experimental set-up to demonstrate a principle and to prove our thesis on digital creativity. The principle is the following equation:
A sequence of pulses (spike trains) in biological organisms = binary streams of signs (bit strings) = data (coordinates, vectors) interpreted as geometric bodies (solids). This equation brings together three spheres of reality that are traditionally considered as distinct:
- the world in our head: information, imagination and ideas in the way they are actually coded in our brain
- the world of digital data processing, represented by binary streams of signs as communication vocabulary between digital processors
- the world of generating new images / spaces / architecture
These three spheres of reality are brought together under the common denominator of information processing. Thus the installation demonstrates that the generation of images / spaces / architecture must no longer be perceived as anthropocentric / expressionistic. Through the use of the computer it is possible to emancipate oneself from established patterns of behavior, evaluation and production.
The installation consists of two digital processors that communicate with each other; the output of each is projected by a video projector.
Processor 1 simulates the atomic units for information processing in our brain: circuits consisting of nerve cells (neurons) that are connected by synapses (shown as blue triangles). The biological neurons that are simulated here communicate with other neurons by sudden increases in electrical potential (spikes): at certain points in time, the voltage rises instantaneously in the soma of the neuron (demonstrated through color changes of the neuron), and subsequently falls equally quickly, approximately one millisecond later. These sudden voltage changes are transmitted to other neurons through axons (demonstrated in the installation by moving white bars that represent spikes). This constitutes an original version of the digital codes used by nature in our brain: every single spike is all-or-nothing; at any point in time there is either a spike or no spike, there being no such thing as a half or a quarter spike. Information is encoded in the distances between the spikes, corresponding to the distances between the 1's in a bit string.
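The all-or-nothing coding described above can be sketched in a few lines. The following toy model is purely illustrative - a leaky integrate-and-fire neuron with arbitrary parameters, not the installation's actual simulation - but it shows how a spike train arises and how it reads directly as a bit string, with the information carried by the distances between the 1's:

```python
import random

def spike_train(steps=40, threshold=1.0, leak=0.9, seed=1):
    """Toy leaky integrate-and-fire neuron (assumed model, arbitrary
    parameters). Returns the output spike train as a bit string:
    one bit per time step, 1 = spike, 0 = no spike."""
    random.seed(seed)
    v, bits = 0.0, []
    for _ in range(steps):
        v = v * leak + random.uniform(0.0, 0.4)  # leaky integration of input
        if v >= threshold:                       # all-or-nothing: fire ...
            bits.append(1)
            v = 0.0                              # ... and reset the potential
        else:
            bits.append(0)                       # no half or quarter spike
    return bits

train = spike_train()
spike_times = [t for t, b in enumerate(train) if b]
# information = the distances between spikes (between the 1's)
intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
print("".join(map(str, train)), intervals)
```

The printed bit string is exactly the form in which, in the installation, the output of Processor 1 is handed on to Processor 2.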
Every neuron in our brain receives spike trains as inputs, i.e. sequences of spikes in time, sent by other neurons. In particular, sensory perceptions - e.g. all built forms visually perceived by us - are coded in the form of spike trains before they reach the neurons in our cerebral cortex. In our experimental set-up, the neural circuit receives 4 spike trains as input. Spike trains Bio 1 and Bio 2 are real spike trains from neurons belonging to the visual region in the brain of monkeys, as recorded by the brain researchers Krueger and Aiple. Instead of a monkey, the data could be taken directly from visitors without a noticeable difference. The other 2 spike trains consist of streams of pulses by means of which the viewer gets into direct contact with the neural circuit: the viewer can generate these spike trains by pressing keys on a keyboard (instead of being directly wired up).
In a biological neuron, some of the incoming spike trains have a stimulating effect (i.e. they increase its readiness to fire), whereas others have an inhibiting effect (i.e. they reduce its readiness to fire). Accordingly, in our installation, some spike trains have stimulating effects, while others have inhibiting effects on the simulated biological neuron. The output spike train is recoded as a binary code, i.e. as a sequence of 1's and 0's (1 = one spike, 0 = no spike), and in this form, which is easily understandable to all artificial computing machinery, it is transmitted to Processor 2. Processor 2 is a form generator, which uses the binary codes coming from Processor 1 to control a chain of commands: by means of a script, the incoming spike trains are, so to speak, transformed into data sets for 3-D solids, which are then represented graphically. Processor 2 interprets the incoming data from Processor 1 in order to control a programme, by selecting and fixing the parameters in the chain of available commands. The core software - the mathematical-geometric potential of the 3D vector program and the controlling script - is not changed by the incoming data from Processor 1. These invariants impose overall patterns on the generated pictures, similarly to the way in which the invariants of our individual human visual apparatus and brain impose overall patterns on the world of images which are perceived and reproduced by us, patterns of which we are not generally aware.
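How a binary code can select and fix the parameters in a chain of available commands can be illustrated with a minimal sketch. This is not the installation's AutoLISP script; the chunk size, the command table and the parameter mapping are all assumptions, chosen only to show the principle of a fixed reading matrix driven by incoming bits:

```python
# Fixed "genome": the invariant chain of available commands. The names are
# taken from the essay; the selection scheme below is a hypothetical sketch.
COMMANDS = ["polyline", "extrude", "union", "intersect", "rotate"]

def transcribe(bits, chunk=3):
    """Read an incoming bit string through a fixed reading matrix:
    each chunk of bits selects one command and one numeric parameter."""
    ops = []
    for i in range(0, len(bits) - chunk + 1, chunk):
        value = int("".join(map(str, bits[i:i + chunk])), 2)  # chunk -> int
        ops.append((COMMANDS[value % len(COMMANDS)], value))  # pick command
    return ops

print(transcribe([1, 0, 1, 0, 1, 1, 0, 0, 1]))
# -> [('polyline', 5), ('intersect', 3), ('extrude', 1)]
```

The invariants - the command table and the reading scheme - stay fixed; only the incoming spikes vary, which mirrors the point above about the overall patterns imposed by the core software.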
One can compare the interplay of random factors and fixed rules in the generation of images through the two processors with the functioning of a genetic system. Similarly to a DNA transcription, Processor 2 offers a chain of genetic elements (= commands), whose reading matrix filters the RNA, as edited information, out of the spike trains. The traditional procedure of anthropocentric creative design of built form is - according to our theory - shifted to the definition of a genome, which in turn generates new built forms.
The experiment shows that relatively few bits, represented by the stream of pulses of a single neuron, have sufficient complexity to produce designs for extremely diverse and sometimes even astonishingly novel visual creations. The same is true for the operators of the script, based on simple geometric commands: polyline, extrude, union, intersect, rotate, etc.
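That relatively few bits suffice for great diversity is easy to make plausible with arithmetic. Under an assumed reading scheme of 3 bits per command (figures chosen purely for illustration), a spike train of only 30 bits already spans a pattern space of over a billion distinct inputs:

```python
# Back-of-envelope count (illustrative figures, not measured from the
# installation): 3 bits per command chunk, 10 chunks per generated design.
bits_per_chunk = 3
chunks = 10
patterns = 2 ** (bits_per_chunk * chunks)  # distinct 30-bit spike trains
print(patterns)  # 1073741824
```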
In this installation, the simulated biological neuron and genome are of an extreme simplicity compared to the information-processing capacity of their biological counterparts. A neuron in our brain is not fed with spike trains from only 4 other neurons or sensors, but simultaneously receives 5,000 to 10,000 spike trains as inputs. Furthermore, the typical frequency of these spike trains is not on the order of 0.1 Hz, as in this installation, but on the order of 50 Hz. The processing of information in a biological neuron is 500 times faster than in this installation, in fact so fast that one could not follow the real spike trains with the naked eye.
This is also the reason why the visualization frequency of Processor 2 is reduced to a perceivable level.
The readability of development / growth / movement becomes possible as a sequence of single frames (image rate < 15 i/sec). Furthermore, it is not only one neuron that is processing and producing images in our brain, but about 40 different brain regions of more than 10^10 (= 10 000 000 000) neurons. These 10^10 neurons are interconnected by means of approximately 10^14 links (axons, synapses and dendrites). The total length of these "wires" within only one cubic millimeter of grey matter in our cerebral cortex already reaches approximately 4 km. This gives a rough idea of the complexity of the "data processor" in our head. The controlling script of the form processor we have used (which has approximately 20 lines, uses 30 exec_commands, 256 colours/8 bit, etc.) is of a similar simplicity when compared to the human genome (100,000 genes, 10^9 elements, etc.).
Nevertheless, we can already make the following observations from this simple experimental set-up:
(1) unforeseen results arise even out of a few elements / small numbers
(2) they lie within a spectrum inherent to the system
(3) different bit-string inputs (be it from monkeys, visitors, keyboard-strokes, light sensors or generated data sets) do not affect the characteristics of the system and the unforeseen nature of its results
(4) modifications of the output can be obtained through modifications of the program (script)
Despite the relatively low degree of complexity of this Neuronal Architecture Processor prototype, the results are more differentiated than those arising out of a traditional analogue architectural production (under the same base conditions). This has consequences for our understanding of art, architecture and human versus machine/digital creativity: it proves our thesis that creativity (in this case the production of architectural images) is the result of a "neuronal machine" and as such can be released from the brain 6.)
PS. for Olafur:
Since I have been observing these developments, the
arguments have become weaker and weaker at the knees, and L. Wittgenstein can no longer
ask himself what the world was. Already before Sokal's hoax, the
reliability of texts has gone. http://www.war-of-the-worlds.org/Radio/.....
Before, your wall with moss would have shown me the difference between architecture and
art, but since - neither in the one nor the other - it is no longer a matter of objects,
forms, colors and since I too am disappearing as a subject, it seems to be rather more a
matter of transfer. But as gathering for an encyclopaedia has somehow stopped, even the
transfer now seems inflationary, because since postmodernism, all forms have been allowed
anyway. And thanks to globalisation anything can be anywhere. Therefore nothing exotic
remains, nothing is autochthonous anymore, and even the esoteric know-how is faked: the moss
could come from Iceland or from the flower market; your waterfall could be a big soundbox
that deceives our senses with its roar? http://www.whatisthematrix.com/
What does our friend Schauberger say: that - since your sculpture disappeared from Venice
- your waterfall makes Möbius movements! In his case the water also flowed from bottom to
top. Flatland, a Romance of Many
Dimensions, without any immigration limits. - Olafur, we should meet there, because my
page has been faked - be aware that you are visiting a faked web site that offers
distorted information on my cause - it's all a fucking good fake, cu asap - Plo!
1.) Fake IDs