About simulation: tools to explore the project
In science, the idea of a “model” goes beyond the mere geometrical description of a given object: it is a simplified representation of a system from a certain point of view; we could even say it is a statement of the system’s state. The idea of a model has much to do with setting up a structure that allows us to analyse, and then understand, a long process along its own development. Simulation is the tool that allows the model to be analysed: a multidimensional (geometrical, metrical, temporal…) investigation of the forces that combine with one another to create the final system.
Simulation has accounted for emergent phenomena, those defined as subsets of a wide universe of cooperative interactions generating various synergies, in nature as well as in human society. *1 We have been adopting instruments to better understand these phenomena, leaving behind a mentality in which emergence was an inexplicable aspect we simply had to accept with resignation.
We can now describe simulation as mathematics in action, allowing us to understand how dynamic changes alter the relationships of cause and effect between the different parts of a whole. Its importance becomes clear when observing generative art: taking for example Marius Watz’s or Toxi’s works, there is no way to get directly to the definitive design just by reading their programs’ code. Some of the behaviours can be glimpsed in there, but the code must be seen in action to be understood, caught in its entirety.
That is what happens in nature too, where complexity is reached gradually: think for example of DNA as source code and of the environment as an interaction of forces affecting humans as a system.
As we said, a given behaviour emerges from the interaction of a system’s parts, so it can be summed up in terms of abilities and properties. Abilities are performed and are usually considered as actions, such as the displacement of objects thanks to a wheel’s rolling ability. Properties are inner aspects instead, such as the wheel’s being round. Properties and abilities exist both in the whole system and in its single parts: the system’s abilities depend on its own properties, which are given by the interaction of the single parts’ abilities.
These two types are the base we use to define the system’s tendencies. Only a multitude can define a trend: in other words, a mass of interacting parts leaning toward the levelling-off of the system in one state or another, according to its inner abilities and their interactions.
Simulating tendencies allows us to structure the “space of the possible”, a space in which to search for new configurations. That is why simulation can be an effective tool for designers and a way of seeing differently.
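The levelling-off of a multitude of interacting parts can be sketched in a few lines of code. The pairwise-averaging rule below is our own illustrative assumption (a simple gossip dynamic), not a model taken from any of the works cited:

```python
import random

def settle(values, rounds=500, seed=0):
    """Repeatedly pick two parts at random and replace both with their
    average: each interaction nudges the multitude toward a shared state."""
    rng = random.Random(seed)
    values = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(values))
        j = rng.randrange(len(values))
        values[i] = values[j] = (values[i] + values[j]) / 2
    return values

rng = random.Random(1)
parts = [rng.uniform(0.0, 10.0) for _ in range(20)]  # 20 parts in random states
settled = settle(parts)
# The spread of states shrinks while their sum stays the same:
# the system levels off into one state without any central control.
```

Each local interaction conserves the total yet shrinks the differences, so the tendency of the whole emerges from the abilities of the parts alone.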
Computation is a set of conditions that determines an outcome on the basis of logical links, algorithms, protocols and so on. As Terzidis says, computation is the basis for exploring unclear and unknown intermediate processes. For this reason, and for its emulative nature, we can say that computation can extend the human intellect. *2
The development of computer science, and the accompanying ability to process a larger amount of information at a much higher speed, makes it possible to consider using the computer’s “talents” to develop computational techniques.
Starting from the early fifties the first projects saw the light, such as Whirlwind, the first computer able to give real-time feedback to users’ inputs (pilots in that case, as it was a flight simulator), or animated simulation, such as a famous Edward E. Zajac film from 1961 showing how a satellite could be stabilised so as to keep one side always pointed at the Earth along its orbital path.
Using computers for design concerns precisely this shift from experience in managing deterministic processes (a culture to which modern science is directly connected) to the simulation of non-linear processes.
“In that empire, the art of cartography reached such perfection that the map of a single province occupied an entire city, and the map of the empire an entire province. In time these enormous maps no longer satisfied, and the colleges of cartographers drew up a map of the empire that matched the empire itself in size, coinciding with it point for point. The succeeding generations judged these works unsuited to the study of cartography; they thought it was simply a useless enormous map and, not without impiety, abandoned it to the inclemencies of sun and winters.” *3
Simulation’s accuracy in resembling reality is particularly required in some specific fields, such as aeronautical engineering, where it serves to verify the aerodynamic performance of surfaces. In architectural research it can instead be used as a “device” to generate re-subjectivations, *4 linking technical performance and formal language to one another.
To make a generative tool out of simulation it is necessary to work consciously on the system’s incompleteness, that is, on its lack of information. In scientific simulations the chosen data are determined so that the model resembles its real counterpart as closely as possible. In architecture, instead, the choice and selection of information becomes crucial in order to produce “the Other”, the unknown.
On the one hand, a limited number of parameters results in a more varied range of products: increasing the number of parameters generates a disorder made of similar variations. When a system has a high number of components, the interactions between them increase, causing newly generated behaviours that are hard to detect and, as a consequence, making diversity less remarkable. Within a single parameter, on the other hand, the wider the range of values, the more the results vary from one another. The number of parameters is therefore inversely proportional to variety, while a parameter’s range of values is directly proportional to it. If, for example, we trace a path where our “agent” rotates at each step by a random rate between 0° and 100°, the path will undergo remarkable changes, making it highly varied.
Parameters and their ranges are elements to be managed and set up during the process.
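The rotating-agent example above can be tried out directly. The sketch below, in Python rather than in any particular modelling environment, uses a single hypothetical “turn” parameter for a walking agent; widening its range of values makes the traced paths vary far more:

```python
import math
import random

def trace_path(turn_range_deg, steps=200, seed=1):
    """At each step the agent rotates by a random angle drawn from
    [0, turn_range_deg] degrees, then moves one unit forward."""
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        heading += math.radians(rng.uniform(0.0, turn_range_deg))
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path

# Same single parameter, two ranges of values: the wider the range,
# the more the resulting paths differ from one another.
narrow = trace_path(turn_range_deg=5)
wide = trace_path(turn_range_deg=100)
```

The number of parameters here stays fixed at one; only its range is managed, which is exactly the kind of setup the process calls for.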
Another useful area in which to investigate the unknown is the one arising from misunderstandings between human and computer language. Human language is ambiguous, while program code simply is not; this allows us to detect an in-between intersection that concerns the project.
Our tendency is to keep an eye on what is already known and assumed to be right. When investigating, we do not search for mistakes: they come up as a surprise, as a natural contradiction of what had been previously thought. We are not arguing about what is absolutely right or wrong (we are not talking about performance) but, on the contrary, about what is correct or mistaken on the basis of a mental map we have already set up.
When does a simulation stop? When the data obtained are enough to understand the system. This is probably the only rule, and it allows us to define three strategies. As previously mentioned, our aim could be to perform in a certain way, and we would then talk about the optimisation of a system, as in morphogenetic simulations guided by the real forces involved, algorithms now included in software such as solidThinking. A second option could take as its goal a point of balance that is not necessarily optimised, by simulating, for example, an over-flowering fibrous system whose configuration is determined not by the forces themselves but by their interweavings.
When a process can be defined in terms of generations, the selective tool can therefore be the designer’s sensitivity, or that of whoever takes their place, as in Karl Sims’s Galapagos installation, where the audience was asked to take part in the evolution of simulated organisms, picking the creatures worth surviving by purely aesthetic criteria.
This happens both in discrete and in continuous processes; in the latter case it is simply a matter of deliberately stopping the simulation when it is no longer justified.
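A generation-based process with an external judge can be sketched as below. The genome, the mutation rule and the “judge” standing in for the audience’s aesthetic choice are all our own illustrative assumptions; this is a minimal selection loop, not Sims’s installation code:

```python
import random

def evolve(population, pick_survivors, mutate, generations=10, seed=0):
    """Generation-based selection: an external judge picks who survives,
    and the next generation is bred from the survivors alone."""
    rng = random.Random(seed)
    for _ in range(generations):
        survivors = pick_survivors(population)
        # Refill the population with mutated copies of the survivors.
        population = [mutate(rng.choice(survivors), rng)
                      for _ in range(len(population))]
    return population

# Toy genome: three numbers. This hypothetical judge prefers genomes
# whose values lie close together, an arbitrary stand-in for taste.
def judge(population):
    return sorted(population, key=lambda g: max(g) - min(g))[:2]

def mutate(genome, rng):
    return [v + rng.uniform(-0.1, 0.1) for v in genome]

rng = random.Random(42)
start = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(8)]
final = evolve(start, judge, mutate)
```

Replacing `judge` with a function that asks a person to choose turns the same loop into an interactive, aesthetic selection process.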
Geometry has been a crucial instrument in the development of design culture, but it has also made design depend on a representation of the final shape’s visual result: we need a geometrical representation of a shape in order to conceive a project. This attitude can even turn out to be a great limitation.
Talking about instruments, computers are still used mainly to represent projects, since architecture is based on a design approach defined by a hierarchical relation that puts the generative process of the shape first and its materialisation after. In other words, we are used to facing the morphological abilities of a material system only as a second instance, after having considered its architectural shape: materialisation, production and construction are in fact separate disciplines, engineered according to a top-down model.
In organisms’ morphogenesis, the process of development and growth generates polymorphic systems that achieve their own complex organisation and shape through the interaction of the material system’s inner abilities with external agents (the environment).
Architectural research is going through a lively moment in debating such themes, changing its approach to the project and learning to develop shape, matter and structure as a single entity capable of defining the emergent peculiarities embodied in the building: material characterisation, geometrical behaviour, production constraints, all as one, organised in complex interactions.
Changing this attitude could even affect the usual configurations of making, helping us out of the whirl of mass-production standards we entered with the industrial revolutions and which was reinforced by the early twentieth-century movements.
Thanks to innovation in computer science, to 3D molecular computing and DNA computing, or even to quantum computing, *5 we can imagine a not-so-distant future of modelling software whose kernel is no longer based on the definition of curves and surfaces, but on molecular aggregations.
For some time Euclideon *6 has been developing an algorithm (Unlimited Detail) that manages a massive amount of voxels with minimal computational effort, with the goal of replacing polygonal models with models made up of connections between minute 3D entities.
So far, they claim it is possible to replace polygons at a density of 64 particles per cubic millimetre, with surprising final results.
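The idea of describing a model as a set of tiny 3D entities instead of polygons can be illustrated with a toy voxeliser. This is a minimal sketch of the voxel concept only; Euclideon’s actual algorithm is not public and is certainly far more sophisticated:

```python
def voxelize(points, voxel_size=1.0):
    """Quantise a cloud of (x, y, z) points into the sparse set of
    grid cells (voxels) they occupy."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

cloud = [(0.2, 0.4, 0.9), (0.3, 0.5, 0.8), (1.5, 0.0, 0.0)]
occupied = voxelize(cloud, voxel_size=1.0)
# Two nearby points fall into the same voxel; shrinking voxel_size
# raises the resolution at the cost of storing many more cells.
```

The model is then just the set of occupied cells, with no curves or surfaces anywhere in its definition.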
Several years ago physics began studying the “nanosphere”, where two main strategies for controlling molecular structuring are being pursued: the first considers the possibility of moving particles and atoms by means of electromagnetic fields; the second, more fascinating, is about learning to build “clever” particles or atoms able to place themselves properly on their own, in other words exploiting a concept of self-sufficient molecular organisation.
In nature there are no representations, only “codes” generating trains of consequences, as real-time actions.
We are now back to “codes”, as the pioneers were in the sixties; we have in fact come to understand how versatile they can be as an investigation tool. We can write code in a direct way (Processing, RhinoScript, Python, COFFEE, MEL…) or in an indirect way (visual programming: XPresso, Grasshopper, vvvv…) to build up shapes through a geometrical synthesis. In nature, shape is just the product of a code, and it becomes our tool for comprehension as a product, not as a matrix.
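As a small illustration of shape as the product of a code, the few lines below (in Python, one of the languages listed above) generate a sunflower-like spiral of points. The divergence angle is the only given datum; the shape exists nowhere except as the consequence of running the code:

```python
import math

def phyllotaxis(n, angle_deg=137.5, scale=1.0):
    """Generate n points of a phyllotactic spiral: point i sits at
    radius scale*sqrt(i), rotated i times by the divergence angle."""
    angle = math.radians(angle_deg)
    return [(scale * math.sqrt(i) * math.cos(i * angle),
             scale * math.sqrt(i) * math.sin(i * angle))
            for i in range(n)]

points = phyllotaxis(500)
# Changing angle_deg by even a fraction of a degree reorganises the
# whole pattern: the code, not a drawing, is the matrix of the shape.
```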
We now have to focus on identifying an efficient tool for the physical expression of code, capable of transforming forces into matter.
The time has come to change paradigm and to free composition and creation from their slavery to geometry.
1 Peter A. Corning, The Re-emergence of “Emergence”: A Venerable Concept in Search of a Theory, Institute for the Study of Complex Systems, 2002
2 Kostas Terzidis, Algorithmic Architecture, Architectural Press, 2006
3 Jorge Luis Borges, Storia universale dell’infamia, “Etc.”
4 Giorgio Agamben, Cos’è un dispositivo, Nottetempo, 2006
5 Ray Kurzweil, The Singularity Is Near, Penguin Group, 2006