Landscape Architecture for Machines

One of the more interesting sub-conversations at last fall's Art + Environment Conference at the Nevada Museum of Art revolved around the question of whether the future of landscape architecture will be for humans at all, or instead for autonomous and semi-autonomous machine systems that will bring their own optical, textural, and haptic demands to the design of built space. As highway signage networks are adapted to orient driverless cars, for instance, we will see continent-spanning pieces of infrastructure designed not for human aesthetic needs but to correspond more efficiently to the instrumentation packages of machines.

We touched on this a few weeks ago here on BLDGBLOG with the idea of sentient geotextiles guiding unmanned aerial vehicles; London-based design firm BERG calls it the rise of the robot-readable world. I was thus interested to see that Timo Arnall of BERG has assembled a short video archive asking, "How do robots see the world? How do they extract meaning from our streets, cities, media and from us?" Arnall's compilation reveals the framing geometries, a kind of entoptic graphic language native to machines, and the directional refocusings deployed by these inhuman users of designed landscapes. One can imagine future gardens optimized for autonomous robot navigation.
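For the technically curious, here is a minimal sketch of one vocabulary the robot-readable world already has: fiducial ArUco tags, high-contrast glyphs designed for camera instrumentation rather than human eyes, detected here with OpenCV in Python. This is entirely my own illustrative assumption, not anything shown in Arnall's video, and it assumes opencv-contrib-python 4.7 or newer for the ArucoDetector API.

```python
# A toy illustration of machine-legible signage: generate an ArUco tag,
# place it in a synthetic "scene," and let the detector read it back.
# Assumes opencv-contrib-python >= 4.7; tag id, size, and scene layout
# are arbitrary choices made for this sketch.
import cv2
import numpy as np

# One tag from a predefined 4x4 dictionary: the machine-facing
# equivalent of a signpost.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
tag = cv2.aruco.generateImageMarker(dictionary, 7, 200)  # id 7, 200 px

# Paste the tag into a blank white frame standing in for a camera view
# of a street or garden surface.
scene = np.full((480, 640), 255, dtype=np.uint8)
scene[140:340, 220:420] = tag

# The robot's half of the exchange: find the tags and recover their
# identities and image-space corner geometry.
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(scene)

if ids is not None:
    for tag_id, quad in zip(ids.flatten(), corners):
        print(f"tag {tag_id}: corners {quad.reshape(-1, 2).tolist()}")
```

The design choice worth noticing is that everything about the tag, its contrast, its quiet border, its rotation-coding, is optimized for the detector rather than for us: signage for the instrumentation package, not the eye.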
