Procedural Modeling of Urban Environments

It’s Sunday, but also day one of SIGGRAPH 2006 in Boston. I’m sitting in room 157 of the Boston Convention and Exhibition Center, waiting for the start of the first course I wanted to attend: Procedural Modeling of Urban Environments. I’m blogging from the side of the room thanks to my Verizon broadband wireless card. I’m going to basically take notes as I might for myself, and publish them in the blog. I’m sure others will find some of this interesting. It also makes me research and reinforce the notes in my own head, makes sure I clean them up so they are coherent, that I put in good links for followup, and that I can get back to them whenever I want. Besides the blogging-in-the-moment (four hours of presentations and notes), I’ve spent about another hour cleaning them up. It’s not likely I’ll be able to do that for every presentation I go to, but it is useful when the topic is important and interesting enough. Yes, I’m experimenting with what and how I blog. 😀

Since the session (and this blog entry) are so long, let me outline the parts, so you can skip to things that are of particular interest if you like:

  1. Peter Wonka on architectural modeling and graphics (individual buildings)
  2. Eric Hanson, computer graphics expert doing lots of big-picture Hollywood work in this area
  3. Pascal Mueller on city modeling and graphics (building, streets, yards, vegetation)
  4. Benjamin Watson on modeling land use and dynamic city organization with an urban growth simulator

Peter Wonka on architectural modeling and graphics

Peter Wonka from Arizona State. His emphasis is on shape, textures, mesh modeling, roofs, inside layouts and shape grammars. Much of his presentation seems to be a walkthrough of his SIGGRAPH paper for this year, “Procedural Modeling of Buildings”.

His work begins with literature, as in books with many figures and photographs, and books that emphasize structure. Also lots of photos and lots of CAD models. He gives an example of a book presentation of an Ionic Frieze and the temple it was used on: this drawn historical diagram can serve as the basis of a new model.

Why procedural modeling for architecture? Because design elements carry across many buildings over time, and even span cultures – Greek columns are his example. Styles are also consistent, so they can be developed as model templates and reused easily.

Different shapes and styles of windows, doors and ledges are classic elements which can be added to a basic building shape to make it unique.

He identified related work:

  • Procedural Modeling of Cities (2001)
  • Instant Architecture (2003)
  • Procedural Modeling of Buildings (2006)

Modeling architecture with Grammars:

  • Model 1 began with Strings and Rules of string replacement
  • Model 2 is based on Shapes and Rules of shape replacement. Derivation is done until the resulting set contains only terminal shapes; geometric interpretation of the shapes is the final step (see the sketch after this list).
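That derivation loop is simple enough to sketch. This is my own toy illustration, not code from the paper; the rule table and shape names are invented:

    # Toy shape-grammar derivation: each rule maps a nonterminal shape label
    # to its successor labels. Labels not in RULES are terminal shapes.
    RULES = {
        "building": ["facade", "facade", "roof"],
        "facade":   ["floor", "floor", "floor"],
        "floor":    ["wall", "window", "wall"],
        "roof":     ["hip_roof"],
    }

    def derive(axiom):
        shapes = [axiom]
        # Keep replacing nonterminal shapes until only terminals remain.
        while any(s in RULES for s in shapes):
            next_shapes = []
            for s in shapes:
                next_shapes.extend(RULES.get(s, [s]))  # terminals pass through
            shapes = next_shapes
        return shapes  # the real system then interprets these geometrically

    print(derive("building"))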

CityEngine is proprietary software they use for city modeling – overhead street view on left and orthogonal bird’s-eye view on the right. It’s a very iterative process – no magic wand. See more about that later in Pascal’s presentation.

The two sides of modeling with grammars are the framework (syntax and semantics of the grammar) and the actual design knowledge (expressed as rules supported by the framework). There’s a balance to be struck between power of the framework and complexity of the rules required by the framework. The extreme he does not recommend is just using C++ to both implement the framework, and to write the rules in it.

Rule format they use is based on L-systems, but with extensions as needed for architecture. I had never even heard of L-systems before the talk, and discovered two very cool things about them. First, they were developed in the 1960s to model the fractal growth processes of plants. And second, one of our KPL v 1.1 programs, Trigraph.kpl, contributed by father and son Corwin and Jan Slater, is an implementation of the Sierpinski triangle, which is also a famous example that can be implemented using an L-system.
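For the curious, the Sierpinski example really is tiny as an L-system. Here’s the classic arrowhead-curve variant as a few lines of Python (my own sketch, not from the talk and not Trigraph.kpl):

    # Sierpinski arrowhead L-system: A -> B-A-B, B -> A+B+A, turning 60 degrees.
    # 'A' and 'B' both mean "draw forward"; '+' and '-' are turtle turns.
    RULES = {"A": "B-A-B", "B": "A+B+A"}

    def l_system(axiom, steps):
        s = axiom
        for _ in range(steps):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    print(l_system("A", 3))  # feed the result to any turtle interpreter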

Peter goes into some detail about their rule language; he identifies some limitations of Split Grammars; he mentions that they are not attempting to replace what tools like Maya do so well already.

He proposes that we can combine various techniques to reach a better result. For instance, we can reconstruct a previous model by generating and rearranging the semantic model (shapes), then applying new texture blocks to it (surfaces and textures). This is a quick path to an entirely new model.

He notes that from his experience, “build by numbers” is a lot more reliable than a digital capture – the real world just isn’t controllable enough to provide perfect lighting and perfect pictures, with no passersby or parked cars, etc…

He gets into the Generative Modeling Language, which he describes as “postscript for models.” This particular page of using GML to model a Gothic window reminds me of my Saint Chappelle story, which I’ll tell one of these days (this blog is long enough already!).

Peter gets into Roof Construction Algorithms, and says there’s still a lot of work to do here, as roof modeling is more complex than facades. Automatically Generating Roof Models from Building Footprints is research he recommends.

He mentions his floorplan modeling is still based on a 10-year-old paper that continues to work well, but I didn’t get his reference on that point.

Stiny’s work on Shape Grammars (lots of references and material online) is good old stuff he recommends and still uses.

Eric Hanson, computer graphics expert doing lots of big-picture Hollywood CG work

Eric Hanson takes over to talk about how he does this kind of work on feature films. He’s finding a lot more work on this now than he did when he started… I think he even said “Golden Age” about CG in film – but that implies it’ll end some day. I don’t think so. 🙂

He talks a bit about the historical movement between realism in film (mentions the 60s and the 70s for this) and usage of sets (digital or constructed). Even in the short history of film we have gone back and forth on this point stylistically more than once. Interesting meta point I just thought of: when the tools and performance are good enough, digital technology will make it entirely possible to make digital film that is as stylistically “real” as the movies he mentions from the 60s and 70s.

He does admit that the difficulty of digital sets is often underestimated, especially by “those in charge” who don’t understand the technical details and complexity involved.

He recommends studying basic elements of architectural design and history, understanding rich lighting and textures, and maintaining a common sense approach to production pipelines (architecture is not done as characters are done).

Best to include some components of photographed or filmed reality – it enhances and sells the scene better, and also lessens stress on the team compared to full CG.

Films to look back on when considering city scape special effects: Metropolis (1927), Things to Come (1936), Citizen Kane (1941), Blade Runner (1982), Hudsucker Proxy (1994), Judge Dredd (1995).

A key technique: how did they deal with parallax (the apparent shift of an object against its background due to a change in observer position)? Moving the camera makes CG difficult and interesting, but understanding parallax also identifies the transitions, which are where we can “cheat” a little, such as by sneaking 2.5D or flat 2D elements into the shot.
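The numbers behind that cheat are simple pinhole-camera math. A toy calculation of mine (the focal length and distances are invented):

    # Parallax under a pinhole camera: a lateral camera move of t meters shifts
    # a point at depth Z by roughly (f * t / Z) pixels, with f in pixels.
    f = 2000.0   # assumed focal length in pixels
    t = 0.5      # camera slides half a meter sideways

    for Z in (5.0, 50.0, 500.0):
        print(f"depth {Z:6.1f} m -> shift {f * t / Z:7.2f} px")

Far objects barely move (two pixels at 500 m here), which is why a distant 2.5D card or a flat matte painting can be swapped in without the eye noticing.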

Recent great achievements in CG: King Kong, Lord of the Rings, Day After Tomorrow, Star Wars 1 and 2 and 3, Spiderman 1 and 2, The Matrix.

He shows the apeman scene from 2001: A Space Odyssey that’s a cheat, showing the line between the rear-screen projection and the physical set with a man in an ape suit. He gives more modern examples, from Bicentennial Man – the view of Washington, D.C. The more they can reuse real film, the less 3D modeling they actually have to do. Far enough in the back of a 3D scene they can also cut in 2.5D models, and farther back a simple matte-painted 2D background. In the scene he did in Bicentennial Man he pointed out the real film in the foreground (pedestrians and bicyclists), the fully-rendered 3D behind them, the 2.5D behind the 3D, and the 2D painting in the far background.

He shows a Tom Hanks scene from Cast Away, which combined blue-screen film of Hanks, 2.5D rocks added in the background, rotoscoping, real film of the sea at Fiji, and digital creation of wave effects. All in one shot/scene.

The basic approaches to use:

  1. Full 3D modeled construction (hardest and most expensive)
  2. Set extension with live plate
  3. Nodal pan from panoramic image
  4. Camera projection/2.5D matte painting

1. Fully 3D modeled – it took them six to seven man-years (over one calendar year) to build 110 buildings in NYC for Day After Tomorrow. This is best when there are many shots, as all the modeling can be reused for each. Tools and power to do this are much better than they were. A downside is that it’s hard to manage the high geometry weight (meaning the very large size of the data required to describe all the models). Ironically, the flexibility this gives a director can cause problems, if the director endlessly iterates and tweaks just because he can.

2. Set Extension w/ Live Action Plate. Currently the most common use of digital sets, this marries live action and CGI. Problem is that it’s inflexible – the film’s live action defines exactly what the CGI must be to match it. He shows a cool example of that work from King Kong – a live street scene set, with the entire skyscape above it CGI-rendered. Interesting point is that lens distortion in the film has to be removed before it is merged with the digital, but then needs to be put back in for the final shot to look and feel like film.

3. Nodal Pan. Basically this is the technique of panning the camera over a very large fixed image. Anime is famous for using this technique to make cartoons more cheaply. It is mainly used for establishing shots, not for action subjects. Camera movement is limited to pan, tilt and zoom. It is possible to stitch single shots together into a large panorama, and then use the panorama this way. He showed doing this with 24 actual astronaut shots – making them into a panorama which they then panned over for a scene in Apollo 13.
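Mechanically, a nodal pan is just a crop window sliding across one wide plate. A minimal sketch of mine, assuming NumPy and ignoring the reprojection a real pipeline would do (a stitched panorama isn’t a flat perspective image):

    import numpy as np

    # Stand-in for a stitched panoramic plate.
    panorama = np.zeros((1080, 8000, 3), dtype=np.uint8)
    frame_w, frame_h = 1920, 1080

    def pan_frames(pano, n_frames):
        """Yield frames by sliding a fixed-size window across the panorama."""
        max_x = pano.shape[1] - frame_w
        for i in range(n_frames):
            x = int(max_x * i / (n_frames - 1))  # linear move; real pans ease in/out
            yield pano[0:frame_h, x:x + frame_w]

    for frame in pan_frames(panorama, 48):
        pass  # each frame would go on to the compositor/encoder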

4. 2.5D matte painting / camera projection. Very widespread currently.

Strategies for managing 3D rendering:

  • Decide on strategy(ies)
  • Establish modeling standards
  • Automate process with tools (often proprietary)
  • Use of Level of Detail (LOD)
  • Use of Delayed-Read RIB Archiving
  • Use of File Management System
  • Baking out frequently and Caching GI
  • Use of Displacement Mapping vs Modeling

(these are getting too technical for me to explain or link to them!)
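That said, the Level of Detail one is easy to picture: pick a cheaper stand-in the farther a building sits from the camera. A toy sketch of my own (the thresholds and model names are invented):

    # Distance-threshold LOD selection.
    LOD_THRESHOLDS = [
        (50.0, "full_geometry"),          # close-up: every ledge and window frame
        (300.0, "simplified_mesh"),       # mid-range: boxes plus displacement maps
        (float("inf"), "textured_card"),  # far away: a flat card with a baked facade
    ]

    def pick_lod(distance):
        for max_dist, model in LOD_THRESHOLDS:
            if distance <= max_dist:
                return model

    print(pick_lod(120.0))  # -> simplified_mesh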

Real world size is certain death, because you quickly run into floating point math problems. Sometimes they avoid this by doing all integer math across a vast scale – no floating point errors in integer math. They also sometimes shrink a scene to avoid the problem: CAD tools were built for modeling mice to ships, not city scale.
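It’s easy to demonstrate why city-scale coordinates kill 32-bit floats; a quick check of mine using NumPy:

    import numpy as np

    # A vertex 100 km from the origin, in meters, plus a 1 mm detail on it.
    x = np.float32(100_000.0)
    offset = np.float32(0.001)
    print((x + offset) - x)          # prints 0.0 -- the millimeter vanished

    # The same offset near the origin survives just fine:
    print(np.float32(0.0) + offset)  # prints ~0.001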

He recommends mapping 2D surface details first as textures rather than modeling them. It’s cheaper to just apply detail as a texture on a model.

NURBS vs Polygons. That’s nonuniform rational B-splines, by the way. SIGGRAPH is so technical he didn’t bother to explain. 😀 Polygons allow faster modeling but slower texturing, so that’s a wash. Polygons do allow better optimization of texture load (RAM). NURBS render faster in RenderMan. Polys render faster in Mental Ray and GI apps.

NURBS modeling technique – continuous strips rather than split polygon tiles. He shows use of Maya to handle tessellation with NURBS.

Polygonal modeling can require less parsing due to combining objects. Tools for manipulating poly faces are much better than those for NURBS.

He shows a million-poly example (ouch) of a streetscape with skyscraper faces, something like New York. It’s only the lower part of a few skyscrapers…

He gets into the various ways of building model propagation:

  • Random/brute force (such as by hand with Maya Geometry Paint)
  • Defined by existing reality (used in Day After Tomorrow)
  • Purely procedural (the bulk of what the others talk about in this course)
  • Custom tools to manage reference files and asset blocks (he uses a lot of this)

He maintains a collection of building and city models, adds to it over time, and reuses them – this allows for quick population of a shot or scene.

He shows a case study of building the cave in Peter Pan – combination of many tools and techniques.

He shows a case study of Day After Tomorrow – again a combination, but they were able to use much more real-world data. They digitized photos at the pixel level, and from that generated building skeletons – which of course needed much more work. They also took lots of texturing from photos. He told a cool story about how the New York Public Library wouldn’t let them shoot the building – because of the book-burning scene in the movie! – so they just modeled the building to be slightly different. 😀 He showed another cool example of using texture to handle gargoyles and details at the top of a building – those had more polygons in them than the rest of the building!

He’s doing a gigapixel project now – Image-Based Terrain – check this out in the SIGGRAPH Guerrilla Studio on Wednesday afternoon. Check out www.xRez.com in a month to see this in action.

Pascal Mueller on Procedural City Modeling

Previous work: Procedural Modeling of Cities (2001)

CityEngine software: 6 developers, 97,000 lines of Linux code

He spends some time showing and explaining CityEngine usage

He explains use of Extended L-systems for Streets (again L-systems), and various models for street generation

He moves into construction of a 3D road model out of vector street data. He calls this pretty easy: use mesh computation for crossings, lanes and sidewalks (they’re just surfaces), then use logic to place common objects (lights, streetlamps, mailboxes, etc…).
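The object-placement half of that is easy to imagine. A toy sketch of mine: walk the road centerline and drop a lamp at a fixed spacing, offset toward the sidewalk (all numbers invented):

    import math

    def place_lamps(centerline, spacing=30.0, offset=4.0):
        """centerline: list of (x, y) points. Returns lamp positions."""
        lamps, carry = [], 0.0
        for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            dx, dy = (x1 - x0) / seg, (y1 - y0) / seg
            nx, ny = -dy, dx                 # left-hand normal = sidewalk side
            d = spacing - carry              # distance along segment to next lamp
            while d <= seg:
                lamps.append((x0 + dx * d + nx * offset,
                              y0 + dy * d + ny * offset))
                d += spacing
            carry = seg - (d - spacing)      # leftover distance carried forward
        return lamps

    print(place_lamps([(0, 0), (100, 0), (100, 80)]))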

He moves into procedural generation of parcels of land. Roads imply lots, and lots divide into parcels. He recommends KML (Keyhole Markup Language) as a relevant GIS format for this. SHP (the shapefile format) was used a lot in the past and is still perhaps the most used; DXF, an ASCII format, is another option.
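Lot-to-parcel subdivision can be sketched with a simple recursive split. My own toy version on rectangles (the real algorithm handles arbitrary polygons):

    # Split a rectangular block along its longer axis until each piece is
    # under the target parcel area.
    def subdivide(x, y, w, h, max_area=600.0):
        if w * h <= max_area:
            return [(x, y, w, h)]
        if w >= h:  # split the longer side so parcels stay roughly square
            return (subdivide(x, y, w / 2, h, max_area) +
                    subdivide(x + w / 2, y, w / 2, h, max_area))
        return (subdivide(x, y, w, h / 2, max_area) +
                subdivide(x, y + h / 2, w, h / 2, max_area))

    print(len(subdivide(0, 0, 120, 80)))  # a 120 m x 80 m block -> 16 parcels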

He moves into modeling of buildings with city-wide variation. Input data that influences the final appearance:

  • Shape of parcel/footprint (influences shape and size)
  • Population density (influences size and function of building)
  • Land use map (influences zoning)
  • Streets (influences front of building, function of building)

Stochastic rules must be used for variety, but can only be taken so far or the model will devolve into chaos. The control grammar keeps that randomness from going too far. User-guided rule selection allows for manual adjustment.
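One common way to get that kind of controlled randomness is to seed the random choice per building, so the city varies but regenerates identically every run. A sketch of mine, not from the talk (the variant names and weights are invented):

    import random

    VARIANTS = [("brick_facade", 0.5), ("plaster_facade", 0.3), ("glass_facade", 0.2)]

    def pick_variant(parcel_id):
        rng = random.Random(parcel_id)  # deterministic per parcel
        names, weights = zip(*VARIANTS)
        return rng.choices(names, weights=weights, k=1)[0]

    print([pick_variant(i) for i in range(6)])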

He shows an example of using CityEngine to reconstruct Pompeii. Started with street map, population density, land use. Worked with architects to build model details of building shapes, facades, doorways, windows, other such elements. Used CityEngine to generate the city-wide model based on this, down even to the detailed model of every building. Can manipulate and adjust all this in CityEngine. Proceeded further to a graphical rendering of fly-through scenes of Pompeii – nice!

Another example: reconstruct a model of a Mayan city. Started with good GIS data, collaborated with archaeologists. Good news is style was very consistent – but no formal design pattern had been previously published. They created one, and a rule set, and an elements set – each of those took one day to do. They give this CityEngine model to the architects who fill in detailed patterns of this model to produce the final model.

Shows a few more cool examples of 3D cityscapes he has rendered. The heights and shapes of the buildings were varied in very interesting ways.

He’s working now with CAAD – Computer Aided Architecture and Design. Working to create 3D models of the “Australian Continent” for the new Dubai World Islands. I had never even heard of them: unbelievable! Worth watching the movie linked from this page. Makes you want to burn more gas and send more money that way, doesn’t it? 😀

He’s working now on revisiting (as in 3D modeling) Le Corbusier’s Contemporary City, from the 20s.

He shows an interesting example of combining art deco style plus international style to produce a new post-modern style. This makes me think not just of creating such designs for use in games and movies, but for use by architects in creating new real-world architectural styles. It seems inevitable to me that these tools will be used that way.

He moves on to Transformations in Design (over time). Reuse, evolution and design combinations. Look for his movie example online – shows quite cool morphing from one building design to a completely different one, and interesting emergent designs that came out of that.

He gets into rule-based distribution of vegetation – he uses Greenworks’ Xfrog for biologically correct vegetation shapes. Shows some cool examples of that. I need to mail them about Phrogram. 😀

He says real-time rendering techniques of city scenes are just not there yet. More research and more power is needed.

Offline rendering works now, and he recommends RenderMan, but only with binaries and compression, because these are such huge datasets.

He recommends DSOs (Dynamic Shared Objects) for procedural shaders

Ambient Occlusion with RenderMan is the choice for exterior lighting.

Benjamin Watson on Modeling Land Use With Urban Simulation

Goals:

  • Automatic placement of buildings (not building generation yet)
  • City layouts should be convincing and typical but not completely novel
  • Controlled automation means minimal effort and maximum effect, while staying controllable by the user

He gives an overview of different approaches from urban planning and urban geography. His work tends to draw more from urban geography, but makes it more detailed.

But their simulation does have some different-from-usual goals:

  • Non-existent places minimize input and prediction requirements
  • User control means processes don’t have to be completely accurate, but instead have to be convincing

Their approach is agent-based, similar to flocking or particles. They model the terrain and structures of the urban environment in this way, organically, by letting the agents do their thing over time.

EA loaned them the SimCity3000 engine for use as their rendering engine – this is cool, but of course imposes constraints: all lots are rectangular, no buildings can be placed on inclines, there are no exits on or off of multi-lane roads, etc… The point of SimCity of course is NOT to automate – the user builds it. On the other hand, it renders very nicely!

He shows a cool example – a SimCity with two urban cores, the cores containing skyscrapers. Highways look organic and natural entering and crossing the map.

He shows an example of asking students to prototype an actual neighborhood in Berlin, and another in Madrid – to show just how far user control can go with these tools.

He shows a land use map they generated, with residential, commercial, industrial and park areas, all connected by roads. He shows another example in which the user specifies an area to be filled with commercial development – because the simulation is emergent and develops over time, this kind of control is accomplished through a “honey” model, in which the honey naturally attracts the agents of the type the user wants to that part of the map.

In general, their simulation is a gridded technique, like a GIS – it’s not granular to subparcels yet.

One advantage of their technique is that you can simulate the growth over time. This also matches better the reality of how cities (especially old ones) have evolved over time.

Input can be done starting with a blank map, or many details and constraints can be provided.

Another property of urban development is clustering – this, too, can be controlled somewhat by controlling the value of proximity.

Interruptibility means that an area can be wiped out and redone (just like in SimCity). Again, it’s “honey” placed in the wiped-out area that will encourage the agents to organically redo that area the way the user wants them to.

Developer agents build:

  • Structures in the environment
  • Property (residential, industrial, commercial, and park types)
  • Roads (primary, access extenders or access connectors)

Property developers move toward and build on the currently most valuable land. They might build if it’s empty, or increase density if not. If value goes up as a result, they commit to that development.

Value is the key that drives development.

Residential values:

  • Near water
  • Near other residents
  • View (higher than average)
  • Far from industry

Commercial values:

  • Near market
  • Near customers
  • Near roads
  • Near water
  • Flat land
  • Away from parks

Industrial values:

  • Flat land
  • Near water
  • Near industry
  • Near roads
  • Far from residents
  • Far from parks

Park values:

  • Near other parks
  • Far from industry/commercial
  • Not valued by other uses
  • Hilly terrain
  • Near water
  • Near residents

value = constraints × (importance · terrain) + honey

The terrain vectors capture features like proximity to water, elevation, etc.; the importance vector weights those features per land use (per the value lists above), and the constraints factor scales the result, presumably down to zero for undevelopable cells.
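Spelled out on a toy grid, that formula looks something like this (my own sketch; the feature layers and weights are invented, not from the talk):

    import numpy as np

    H, W = 20, 20
    near_water = np.random.rand(H, W)   # stand-ins for real terrain feature layers
    flatness   = np.random.rand(H, W)
    near_roads = np.random.rand(H, W)
    buildable  = (flatness > 0.2).astype(float)  # hard constraint: 0 or 1
    honey = np.zeros((H, W))
    honey[5:8, 5:8] = 10.0              # the user wants growth in this block

    # Importance weights for, say, industrial use: flat land, water, roads.
    terrain_score = 0.5 * near_water + 1.0 * flatness + 0.8 * near_roads

    value = buildable * terrain_score + honey
    target = np.unravel_index(np.argmax(value), value.shape)
    print("developer agent heads for cell", target)  # the honeyed block wins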

He gets into a comparison between the sim’s urban planning results and real-world urban planning results. Houston versus the sim was really fairly similar. Consider what that means for a moment, please. It means that our own US values of urban planning and growth are nicely described by the values stated above. Makes one think, doesn’t it?

Future work they will add to the simulation:

  • Speed
  • Mixed use
  • Better parcels
  • Deeper road hierarchy
  • Higher level control
  • History, culture (it’s tuned for the US now), “character”

4 thoughts on “Procedural Modeling of Urban Environments”

  1. […] I’ve posted previously on procedurally generated content, from a SIGGRAPH 2006 course on procedurally generated cities. WW: Our team is probably around 80 people right now. We have a disproportionately large number of programmers on this team and a small number of artists, because of all the procedural content. So, probably 40 percent of the team is programmers, which is pretty high. The art staff is probably about a third of the size of [the art staff assigned to] a typical EA game. And all our artists are very technical as well, so they’re doing a lot of the programming and scripting. […]

  2. Tom says:

    Thanks. This is really interesting stuff – I’d love to see a Simcity in the future that uses Pascal Mueller’s building generation – you could have options that determine the style of your city’s buildings.
