Monthly Archives: July 2006

Emerging Technologies at SIGGRAPH 2006

From MIT’s Media Lab, the I/O Brush (check out the video at the link)

I/O Brush is a new drawing tool to explore colors, textures, and movements found in everyday materials by “picking up” and drawing with them. I/O Brush looks like a regular physical paintbrush but has a small video camera with lights and touch sensors embedded inside. Outside of the drawing canvas, the brush can pick up color, texture, and movement of a brushed surface. On the canvas, artists can draw with the special “ink” they just picked up from their immediate environment.

Also from the Media Lab, Topobo

Topobo is a 3D constructive assembly system with kinetic memory, the ability to record and playback physical motion. Unique among modeling systems is Topobo’s coincident physical input and output behaviors. By snapping together a combination of Passive (static) and Active (motorized) components, people can quickly assemble dynamic biomorphic forms like animals and skeletons with Topobo, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly play back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements and walk repeatedly. The same way people can learn about static structures playing with building blocks, they can learn about dynamic structures playing with Topobo.

Also from the Media Lab, Audiopad.  The most common theme I saw was tabletops of one type or another, which did interesting things as people manipulated the surface of the table, or objects on it.  Audiopad was one of these: a techno music composition system based on manipulating objects on the surface of the table, with a projector providing cues and feedback directly on the surface.

From the University of Tokyo, a Forehead Retina System

A small camera and 512 forehead-mounted electrodes capture the frontal view, extract outlines, and convert the data to tactile electrical stimulation. The system is primarily designed for the visually impaired, but it can be a third eye for users with normal sight. 

That really doesn’t do justice to the importance of this.  A headband with an array of electrodes rests on the forehead.  A small camera provides a real-time video feed, which is converted into shapes.  Those shapes are used to activate the electrode array, translating the video signal into a tactile sensation the wearer feels on the forehead.  Yes, it allows blind people to perceive objects in front of them.  How’s this for a worthwhile project?
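
Since the pipeline itself is so interesting, here’s a rough Python sketch of how I imagine it working. To be clear, this is my own guess at the idea, not the project’s actual code: the 16x32 electrode grid, the OpenCV edge detector, and the on/off thresholding are all my assumptions.

    # My own sketch of the camera-to-electrode idea (not the project's code).
    # Assumptions: a 16x32 grid approximating the 512 electrodes, OpenCV for
    # capture and outline extraction, and simple on/off stimulation.
    import cv2
    import numpy as np

    GRID_ROWS, GRID_COLS = 16, 32  # 16 * 32 = 512 "electrodes"

    def frame_to_stimulation(frame):
        """Convert one video frame into a 16x32 on/off stimulation pattern."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # extract outlines
        # Downsample the edge image to the electrode grid's resolution.
        small = cv2.resize(edges, (GRID_COLS, GRID_ROWS),
                           interpolation=cv2.INTER_AREA)
        return (small > 0).astype(np.uint8)  # 1 = stimulate that electrode

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(frame_to_stimulation(frame))  # each 1 would drive one electrode
    cap.release()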

Goals
According to a 2003 World Health Organization report, up to 45 million people are totally blind, while 135 million live with low vision. However, there is no standard visual substitution system that can be conveniently used in daily life. The goal of this project is to provide a cheap, lightweight, yet fully functional system that provides rich, dynamic 2D information to the blind.

From Mitsubishi Electric Research Lab, Submerging Technologies

These made people smile more than anything else there, I think.  They’re so new that it’s difficult to find information or photos of them online.  There were three displays in the set.  First, a tabletop filled with water, with a video display below it.  Little triangles like paper airplanes would move and flow on the display surface – but their movements were a realistic result of manipulating the water in the tank.  Make a wave, and the shapes are pushed away.  I’m not doing it justice!  Second was a water harp built of musical “strings,” which were actually streams of water – interrupt one of the stream/strings and a musical note sounds.  Run your hand down the device and it’s like running your hand across a piano’s keys.  Lots of smiles from this one, too.  Last was a “tantalizing fountain.”  By default it sprayed water in a kind of half-dome shape down into the pool.  But as you moved your hand forward to touch the water, the shape of the spray would adjust to keep the stream away from your approaching hand!  Paul Dietz is the senior researcher on these projects.  He was there joking about how wet they all got as they were “debugging” the displays.  First time I’ve heard of those two concepts going together, lol.  I suspect you really like your job, Paul?  🙂

Last is from the University of Tokyo and NTT, Tablescape Plus

A new type of display system for digital kiosks, multiple-aspect viewers, and tabletop theater that uses placed objects as projection screens and input devices.

Once again, this is such new stuff there’s nothing about it online yet, and that description doesn’t really do it justice.  Let me try to describe a couple of the interesting examples I saw.  Picture a fully rendered 3D car model displayed on a flat screen TV.  In front of it on the flat tabletop is a physical model of the same car.  Want to move or turn the car on the screen?  OK, move or turn the car on the tabletop.  Presto, the car moves to match, on the screen.  The more complex example had geometric shapes, a balloon model, and a couple of other items – again on the screen, and on the flat tabletop.  The screen display was enough like the table display that when someone reached a hand in to move an object on the table, the natural reaction was to look for the hand on the screen display – but of course it was not there.  And that, of course, is the point. 

The rendering on the screen was completely realtime.  The applications of this for gaming are maybe most obvious.  How about remote gaming with friends, with the virtual tabletop in the middle being where we see our game pieces combined?

Another really obvious application of this?  Machinima!  Think of using this to do a kind of digital claymation.  After all, the things as rendered on the digital display do not have to match the physical object on the table top.  Take it further so that the actual movements as rendered include natural movement (like steps forward when an avatar moves forward, or the swinging of a sword as my knight takes your cowering bishop).  Innnnteresting!

A couple fun KPL contacts from SIGGRAPH day 1

Esteban Clua is a professor of Computer Science at PUC Rio in Brazil. I met him last January at the Academic Days on Gaming conference. He asked me to participate in the SIGGRAPH panel on Computer Science and Gaming – and is also already using KPL to teach at his university. He has submitted a similar panel to GDC, the Game Developer Conference, and asked me to be part of that. The things that are happening with Phrogram are going to make GDC a particularly fun place to be next year.

Esteban is working on a textbook around KPL programming – I’m having lunch with him tomorrow and will find out more about how that’s going. His book will be in Brazilian Portuguese, of course – but the round-the-world translation of the KPL IDE, website and supporting material is one of the cooler things about how KPL is happening, so I don’t think it’s a stretch to think we might arrange for it to be translated quickly.

Esteban was the most important person I wanted to catch up with here at SIGGRAPH, and the synchronicity couldn’t have been better. I hadn’t even left the registration room this morning before I saw him waving and coming to meet me. And this is a very very big conference…

Another fun meeting – this time with someone I had not met yet – was with Peter Border. He’s a Physicist at the University of Minnesota – but he’s been using game programming for years as a way of teaching physics and mechanics. He’s looking now at moving that teaching down to the high school level, and the idea of using KPL or Phrogram for that is pretty obvious. We’ll talk more about that, I’m sure. He’s presenting on the educator track as well, A Data Visualization Course at an Art School. This is a really cool example of cross-discipline education, teaching computer graphics and data visualization in an art school course. Lots of that is going on, and more will be, as digital media and entertainment make their way further into university education.

Procedural Modeling of Urban Environments

It’s Sunday, but also day one of SIGGRAPH 2006 in Boston. I’m sitting in room 157 of the Boston Convention and Exhibition Center, waiting for the start of the first course I wanted to attend: Procedural Modeling of Urban Environments. I’m blogging from the side of the room thanks to my Verizon broadband wireless card. I’m basically going to take notes as I might for myself, and publish them in the blog. I’m sure others will find some of this interesting. It also makes me research and reinforce the notes in my own head, makes sure I clean them up so they are coherent, that I put in good links for followup, and that I can get back to them whenever I want. Besides the blogging-in-the-moment (four hours of presentations and notes), I’ve spent about another hour cleaning them up. Not likely I’ll be able to do that with all the presentations I go to, but it is useful when the topic is important and interesting enough. Yes, I’m experimenting with what and how I blog. 😀

Since the session (and the blog) are so long, let me outline the parts, so you can skip to things that are of particular interest if you like:

  1. Peter Wonka on architectural modeling and graphics (individual buildings)
  2. Eric Hanson, computer graphics expert doing lots of big-picture Hollywood work in this area
  3. Pascal Mueller on city modeling and graphics (building, streets, yards, vegetation)
  4. Benjamin Watson on modeling land use and dynamic city organization with an urban growth simulator

Peter Wonka on architectural modeling and graphics

Peter Wonka from Arizona State. His emphasis is on shape, textures, mesh modeling, roofs, inside layouts and shape grammars. Much of his presentation seems to be a walkthrough of his SIGGRAPH paper for this year, “Procedural Modeling of Buildings”.

His work begins with literature, as in books with many figures and photographs, and books that emphasize structure. Also lots of photos and lots of CAD models. He gives an example of a book presentation of an Ionic Frieze and the temple it was used on: this drawn historical diagram can serve as the basis of a new model.

Why procedural modeling for architecture? Because design elements carry across many buildings over time, and even span cultures – Greek columns are his example. Styles are also consistent, so they can be developed as model templates and reused easily.

Different shapes and styles of windows, doors and ledges are classic elements which can be added to a basic building shape to make it unique.

He identified related work:

Procedural Modeling of Cities (2001)

Instant Architecture (2003)

Procedural Modeling of Buildings (2006)

Modeling architectures with Grammars:

  • Model 1 began with Strings and Rules of string replacement
  • Model 2 is based on Shapes and Rules of shape replacement. Derivation is done until resulting set contains only terminal shapes. Geometric interpretation of Shapes is the final step.
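
To make the derivation idea concrete, here’s a toy Python version of “replace shapes until only terminals remain.” The rule set and shape names are invented by me, and a real split grammar also carries geometry (position and size) along with each shape:

    # Toy shape-grammar derivation: rewrite non-terminal shapes until only
    # terminal shapes remain. Rules and names are invented; real split
    # grammars attach geometry (position, size) to every shape.
    RULES = {
        "building": ["ground_floor", "floor", "floor", "roof"],
        "ground_floor": ["door", "window", "window"],
        "floor": ["window", "window", "window"],
    }
    TERMINALS = {"door", "window", "roof"}

    def derive(shapes):
        while any(s not in TERMINALS for s in shapes):
            result = []
            for s in shapes:
                result.extend(RULES.get(s, [s]))  # replace, or keep terminals
            shapes = result
        return shapes  # the geometric interpretation step would follow

    print(derive(["building"]))
    # ['door', 'window', 'window', 'window', ..., 'roof']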

CityEngine is proprietary software they use for city modeling – overhead street view on the left and orthogonal bird’s-eye view on the right. It’s a very iterative process – no magic wand. See more about that later in Pascal’s presentation.

The two sides of modeling with grammars are the framework (syntax and semantics of the grammar) and the actual design knowledge (expressed as rules supported by the framework). There’s a balance to be struck between power of the framework and complexity of the rules required by the framework. The extreme he does not recommend is just using C++ to both implement the framework, and to write the rules in it.

The rule format they use is based on L-systems, but with extensions as needed for architecture. I had never even heard of L-systems before the talk, and discovered two very cool things about them. First, they were developed in the 1960s to model the fractal growth processes of plants. And second, one of our KPL v 1.1 programs, Trigraph.kpl, contributed by father and son Corwin and Jan Slater, is an implementation of the Sierpinski triangle, which is also a famous example that can be implemented using an L-system.
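
Here’s how small an L-system actually is, by the way – a few lines of Python plus the standard published Sierpinski triangle rules. Drawing the result with a turtle (F and G mean “move forward,” + and − mean “turn 120°”) is left out for brevity:

    # Minimal L-system rewriting, with the standard Sierpinski triangle
    # rules. Interpreting the string with a turtle (F, G = draw forward;
    # +, - = turn 120 degrees) produces the fractal.
    def expand(axiom, rules, generations):
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    print(expand("F-G-G", {"F": "F-G+F+G-F", "G": "GG"}, 2))
    # F-G+F+G-F-GG+F-G+F+G-F+GG-F-G+F+G-F-GGGG-GGGG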

Peter goes into some detail about their rule language; he identifies some limitations of Split Grammars; he mentions that they are not attempting to replace what tools like Maya do so well already.

He proposes that we can combine various techniques to reach a better result. For instance, we reconstruct a previous model by generating and rearranging the semantic model (shapes), then applying new texture blocks to it (surfaces and textures). This is a quick path to an entirely new model.

He notes that from his experience, “build by numbers” is a lot more reliable than a digital capture – the real world just isn’t controllable enough to provide perfect lighting and perfect pictures, with no passersby or parked cars, etc…

He gets into the Generative Modeling Language, which he describes as “postscript for models.” This particular page of using GML to model a Gothic window reminds me of my Sainte-Chapelle story, which I’ll tell one of these days (this blog is long enough already!).

Peter gets into Roof Construction Algorithms, and says there’s still a lot of work to do here, as roof modeling is more complex than facades. Automatically Generating Roof Models from Building Footprints is research he recommends.

He mentions his floorplan modeling is still based on a 10-year-old paper that continues to work well, but I didn’t get his reference on that point.

Stiny’s work on Shape Grammars (lots of references and material online) is good old stuff he recommends and still uses.

Eric Hanson, computer graphics expert doing lots of Hollywood big picture CG work

Eric Hanson takes over to talk about how he does this kind of work on feature films. He’s finding a lot more work now on this than he did when he started… I think he even said “Golden Age” about CG in film – but that implies it’ll end some day. I don’t think so. 🙂

He talks a bit about the historical movement between realism in film (he mentions the 60s and the 70s for this) and usage of sets (digital or constructed). Even in the short history of film we have gone back and forth on this point stylistically more than once. Interesting meta point I just thought of: when the tools and performance are good enough, digital technology will make it entirely possible to make digital film that is as stylistically “real” as the movies he mentions from the 60s and 70s.

He does admit that the difficulty of digital sets is often underestimated, especially by “those in charge” who don’t understand the technical details and complexity involved.

He recommends studying basic elements of architectural design and history, understanding rich lighting and textures, and maintaining a common sense approach to production pipelines (architecture is not done as characters are done).

Best to include some components of photographed or filmed reality – it enhances and sells the scene better, and also lessens stress on the team compared to full CG.

Films to look back on when considering city scape special effects: Metropolis (1927), Things to Come (1936), Citizen Kane (1941), Blade Runner (1982), Hudsucker Proxy (1994), Judge Dredd (1995).

A key technique: how did they deal with parallax (the apparent shift of an object against a background due to a change in observer position)? Moving the camera can make CG difficult and interesting, but understanding parallax can also identify the transitions, which are where we can “cheat” a little, such as by sneaking 2.5D or 2D elements into the shot.
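
A back-of-envelope Python illustration of why this works (the numbers here are made-up examples of mine, not anything from the talk):

    # Rough parallax math: how far (in degrees) an object appears to shift
    # when the camera moves sideways. Far enough back, the shift is so
    # small that flat 2.5D or 2D stand-ins become safe to cut in.
    import math

    def parallax_degrees(camera_shift_m, object_distance_m):
        return math.degrees(math.atan2(camera_shift_m, object_distance_m))

    print(parallax_degrees(1.0, 10.0))    # lamppost 10 m away: ~5.7 degrees
    print(parallax_degrees(1.0, 1000.0))  # tower 1 km away: ~0.06 degrees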

Recent great achievements in CG: King Kong, Lord of the Rings, Day After Tomorrow, Star Wars 1 and 2 and 3, Spiderman 1 and 2, The Matrix.

He shows the 2001: A Space Odyssey apeman scene that’s a cheat, showing the line between the rear screen projection and the physical set with a man in an ape suit. He gives more modern examples, from Bicentennial Man – the view of Washington, D.C. The more they can reuse real film, the less 3D modeling they actually have to do. Far enough in the back of a 3D scene they can also cut in 2.5D models, and farther back a simple matte-painted 2D background. In the scene he did in Bicentennial Man he pointed out the real film in the foreground (pedestrians and bicyclists), the fully-rendered 3D behind them, the 2.5D behind the 3D, and the 2D painting in the far background.

He shows a Tom Hanks scene from Cast Away, which combined blue screen film of Hanks, 2.5D rocks added in the background, rotoscoping, real film of the sea at Fiji, and digital creation of wave effects. All in one shot/scene.

The basic approaches to use:

  1. Full 3D modeled construction (hardest and most expensive)
  2. Set extension with live plate
  3. Nodal pan from panoramic image
  4. Camera projection/2.5D matte painting

1. Fully 3D modeled – it took them 6 to 7 man-years (over 1 calendar year) to build 110 buildings in NYC for Day After Tomorrow. This is best when there are many shots, as all the modeling can be reused for each. Tools and power to do this are much better than they were. A downside is that it’s hard to manage the high geometry weight (meaning the very large size of the data required to describe all the models). Ironically, the flexibility this gives to a director can cause problems, if the director endlessly iterates and tweaks just because he can.

2. Set Extension w/ Live Action Plate. Currently the most common use of digital sets, this marries live action and CGI. Problem is that it’s inflexible – the film’s live action defines exactly what the CGI must be to match it. He shows a cool example of that work from King Kong – a live street scene set, with the entire skyscape above it CGI-rendered. Interesting point is that lens distortion in the film has to be removed before it is merged with the digital, but then needs to be put back in for the final shot to look and feel like film.

3. Nodal Pan. Basically this is the technique of panning the camera over a very large fixed image. Anime is famous for using this technique to produce animation more cheaply. It’s mainly used for establishing shots, not for action subjects. There is limited camera movement – pan, tilt and zoom. It is possible to stitch together single shots into a large panorama, and then use the panorama this way. He showed doing this with 24 actual astronaut shots – making them into a panorama which they then panned over for a scene in Apollo 13.
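
Here’s a minimal Python sketch of the technique, using Pillow: slide a fixed viewport across one big stitched image. The file name, sizes, and frame count are all placeholders of mine:

    # Nodal pan sketch: fake camera movement by cropping a moving viewport
    # out of one large stitched panorama. File name and sizes are made up.
    from PIL import Image

    pano = Image.open("panorama.jpg")  # e.g. an 8000 x 1080 stitched image
    VIEW_W, VIEW_H = 1920, 1080

    def pan_frames(n_frames):
        max_x = pano.width - VIEW_W
        for i in range(n_frames):
            x = int(max_x * i / (n_frames - 1))  # simple left-to-right pan
            yield pano.crop((x, 0, x + VIEW_W, VIEW_H))

    for idx, frame in enumerate(pan_frames(240)):  # 10 seconds at 24 fps
        frame.save(f"pan_{idx:04d}.png")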

4. 2.5D matte painting / camera projection. Very widespread currently.

Strategies for managing 3D rendering:

  • Decide on strategy(ies)
  • Establish modeling standards
  • Automate process with tools (often proprietary)
  • Use of Level of Detail (LOD)
  • Use of Delayed Read RIB Archiving
  • Use of File Management System
  • Baking out frequently and Caching GI
  • Use of Displacement Mapping vs Modeling

(these are getting too technical for me to explain or link to them!)
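
One of them is simple enough to sketch, though: Level of Detail just means swapping in cheaper stand-ins as a model gets farther from the camera, which ties right back to the 3D/2.5D/2D layering above. The distance cutoffs and model names in this little Python sketch are mine:

    # Level of Detail (LOD) selection: use cheaper stand-ins with distance.
    # Cutoffs and model names are invented for illustration.
    LODS = [
        (50.0, "building_full_3d"),        # near: full geometry
        (500.0, "building_2_5d"),          # mid: camera-projected 2.5D
        (float("inf"), "building_matte"),  # far: flat 2D painting
    ]

    def pick_lod(distance_m):
        for cutoff, model in LODS:
            if distance_m < cutoff:
                return model

    print(pick_lod(30.0))    # building_full_3d
    print(pick_lod(2000.0))  # building_matte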

Real world size is certain death, because you quickly run into floating point math problems. Sometimes they avoid this by doing all integer math across a vast scale – no floating point errors in integer math. They also sometimes shrink a scene to avoid the problem: CAD tools were built for modeling mice to ships, not city scale.
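
You can see the floating point problem in a couple of lines of Python (my example, using 32-bit floats as a typical graphics pipeline would):

    # At city scale, 32-bit floats can't represent small offsets anymore:
    # a 1 mm detail 100 km from the scene origin simply vanishes.
    import numpy as np

    offset = np.float32(100_000.0)  # 100 km from the origin, in meters
    detail = np.float32(0.001)      # a 1 mm feature

    print(offset + detail == offset)  # True -- the millimeter is lost
    print(100_000_000 + 1)            # integer math in mm stays exact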

He recommends mapping 2D surface details first as textures, not modeling them. Cheaper to just apply them as textures on a model.

NURBS vs Polygons. That’s nonuniform rational B-splines, by the way. SIGGRAPH is so technical he didn’t bother to explain. 😀 Polygons allow faster modeling but slower texturing, so that’s a wash. Polygons do allow better optimization of texture load (RAM). NURBS render faster in RenderMan. Polys render faster in Mental Ray and GI apps.

NURBS modeling technique – continuing strips rather than split polygon tiles. He shows use of Maya to handle tessellation with NURBS.

Polygonal modeling can require less parsing due to combining objects. Tools for manipulating poly faces are much better than those for NURBS.

He shows a million-poly example (ouch) of a streetscape with skyscraper faces, something like New York. It’s only the lower part of a few skyscrapers…

He gets into the various ways of propagating building models:

  • Random/brute force (such as by hand with Maya Geometry Paint)
  • Defined by existing reality (used in Day After Tomorrow)
  • Purely procedural (the bulk of what the others talk about in this course)
  • Custom tools to manage reference files and asset blocks (he uses a lot of this)

He maintains a collection of building and city models, adds to it over time, and reuses them – this allows for quick population of a shot or scene.

He shows a case study of building the cave in Peter Pan – combination of many tools and techniques.

He shows a case study of Day After Tomorrow – again a combination, but they’re able to use much more real-world data. They digitized photos at the pixel level, and from that generated building skeletons – which of course needed much more work. They also took lots of texturing from photos. He told a cool story about how the New York Public Library wouldn’t let them shoot the building – because of the book-burning scene in the movie! – so they just modeled the building to be slightly different. 😀 He showed another cool example of using texture to handle gargoyles and details at the top of a building – those had more polygons in them than the rest of the building!

He’s doing a gigapixel project now – Image-Based Terrain – check this out in the SIGGRAPH Guerrilla Studio on Wednesday afternoon. Check out www.xRez.com in a month to see this in action.

Pascal Mueller on Procedural City Modeling

Previous work: Procedural Modeling of Cities (2001)

CityEngine software: 6 developers, 97,000 lines of Linux code

He spends some time showing and explaining CityEngine usage

He explains use of Extended L-systems for Streets (again L-systems), and various models for street generation

He moves into construction of a 3D road model out of vector street data. He calls this pretty easy: use mesh computation for crossings, lanes and sidewalks (they’re just surfaces), then use logic to place common objects (lights, streetlamps, mailboxes, etc…).
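
The object-placement step is easy to picture with a little Python: walk each road centerline and drop a streetlamp every so many meters. The road coordinates and the 30 m spacing are my own placeholders:

    # Sketch of the "place common objects" step: walk a road centerline and
    # drop a streetlamp every `spacing` meters. Coordinates and the 30 m
    # spacing are placeholders.
    import math

    def place_lamps(polyline, spacing=30.0):
        lamps, carried = [], 0.0  # carried = distance since the last lamp
        for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            d = spacing - carried
            while d <= seg:
                t = d / seg
                lamps.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
                d += spacing
            carried = (carried + seg) % spacing
        return lamps

    print(place_lamps([(0, 0), (100, 0)]))  # lamps at x = 30, 60, 90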

He moves into procedural generation of parcels of land. Roads imply lots, lots divide into parcels. He recommends KML (Keyhole Markup Language) as a relevant GIS format for this. SHP was used a lot in the past, and is still perhaps the most-used. DXF is another option.

He moves into modeling of buildings with city-wide variation. Input data that influences the final appearance:

  • Shape of parcel/footprint (influences shape and size)
  • Population density (influences size and function of building)
  • Land use map (influences zoning)
  • Streets (influences front of building, function of building)

Stochastic rules must be used for variety, but can only be taken so far or the model will devolve to chaos. The control grammar keeps that randomness from going too far. A user-guided rule selection allows for manual adjustment.
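
Here’s my toy Python reading of that balance: random choices give each building variety, while a fixed “control” rule plus a per-building seed keep things coherent and repeatable. The rule names and weights are invented:

    # Bounded randomness: stochastic rule choice for variety, a fixed
    # control rule for coherence, and per-building seeding so the same
    # building always regenerates the same way. Names/weights invented.
    import random

    FLOOR_RULES = ["plain_window", "arched_window", "balcony_window"]
    WEIGHTS = [0.6, 0.3, 0.1]

    def facade(building_id, floors=4):
        rng = random.Random(building_id)  # same id -> same facade
        rows = ["entrance_with_door"]     # control rule: ground floor fixed
        rows += rng.choices(FLOOR_RULES, weights=WEIGHTS, k=floors - 1)
        return rows

    print(facade(42))
    print(facade(43))  # different building, different (but stable) variety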

He shows an example of using CityEngine to reconstruct Pompeii. Started with street map, population density, land use. Worked with architects to build model details of building shapes, facades, doorways, windows, other such elements. Used CityEngine to generate the city-wide model based on this, down even to the detailed model of every building. Can manipulate and adjust all this in CityEngine. Proceeded further to a graphical rendering of fly-through scenes of Pompeii – nice!

Another example: reconstructing a model of a Mayan city. Started with good GIS data, collaborated with archaeologists. Good news is the style was very consistent – but no formal design pattern had been previously published. They created one, plus a rule set and an elements set – each of those took one day to do. They gave this CityEngine model to the architects, who filled in detailed patterns to produce the final model.

Shows a few more cool examples of 3D cityscapes he has rendered. The height and shape of the buildings were varied in very interesting ways.

He’s working now with CAAD – Computer Aided Architecture and Design. Working to create 3D models of the “Australian Continent” for the new Dubai World Islands. I had never even heard of them: unbelievable! Worth watching the movie linked from this page. Makes you want to burn more gas and send more money that way, doesn’t it? 😀

He’s working now on revisiting (as in 3D modeling) Le Corbusier’s Contemporary City, from the 20s.

He shows an interesting example of combining Art Deco style plus International Style to produce a new post-modern style. This makes me think not just of creating such designs for use in games and movies, but for use by architects in creating new real-world architectural styles. It seems inevitable to me that these tools will be used that way.

He moves on to Transformations in Design (over time). Reuse, evolution and design combinations. Look for his movie example online – shows quite cool morphing from one building design to a completely different one, and interesting emergent designs that came out of that.

He gets into rule-based distribution of vegetation – he uses Greenworks’ Xfrog for biologically correct vegetation shapes. Shows some cool examples of that. I need to mail them about Phrogram. 😀

He says real-time rendering techniques of city scenes are just not there yet. More research and more power is needed.

Offline rendering works now, and he recommends RenderMan, but only with binaries and compression, because these are such huge datasets.

He recommends DSOs (Dynamic Shared Objects) for procedural shaders

Ambient Occlusion with RenderMan is the choice for exterior lighting.

Benjamin Watson on Modeling Land Use With Urban Simulation

Goals:

  • Automatic placement of buildings (not building generation yet)
  • City layouts should be convincing and typical but not completely novel
  • Controlled automation means minimal effort and maximum effect, as well as controllable by user

He overviews some of the different examples of urban planning and urban geography. His work draws more from urban geography, but makes it more detailed.

But their simulation does have some different-from-usual goals:

  • Non-existent places minimize input and prediction requirements
  • User control means processes don’t have to be completely accurate, but instead have to be convincing

Their approach is agent-based, similar to flocking or particles. They model the terrain and structures of the urban environment in this way, organically, by letting the agents do their thing over time.

EA loaned them the SimCity3000 engine for use as their rendering engine – this is cool, but of course imposes constraints: all lots are rectangular, no buildings can be placed on inclines, there are no exits on or off of multi-lane roads, etc… The point of SimCity of course is NOT to automate – the user builds it. On the other hand, it renders very nicely!

He shows a cool example – a SimCity with two urban cores, the cores containing skyscrapers. Highways look organic and natural entering and crossing the map.

He shows an example of asking students to prototype an actual neighborhood in Berlin, and another in Madrid – to show just how far user control can go with these tools.

He shows a land use map they generated, with residential, commercial, industrial and park areas, all connected by roads. He shows another example in which the user specifies an area to be filled with commercial development – because the simulation is emergent or develops over time, this kind of control is accomplished through a “honey” model, in which the honey naturally attracts the agents of the type the user wants at that part of the map.

In general, their simulation is a gridded technique, like a GIS – it’s not granular to subparcels yet.

One advantage of their technique is that you can simulate the growth over time. This also matches better the reality of how cities (especially old ones) have evolved over time.

Input can be done starting with a blank map, or many details and constraints can be provided.

Another property of urban development is clustering – this, too, can be controlled somewhat by controlling the value of proximity.

Interruptibility means that an area can be wiped out and redone (just like in Sim City). Again, it’s “honey” that’s placed in that wiped out area that will encourage the agents to organically redo that area the way the user wants them to.

Developer agents build:

  • Structures in the environment
  • Property (res, ind, com, and park types)
  • Roads (primary, access extenders or access connectors)

Property developers move toward and build on the currently most valuable land. They might build if it’s empty, or increase density if not. If value goes up as a result, they commit to that development.
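
Here’s a toy Python version of that loop, as I understood it. The 3x3 grid and the “density 3 is best” value function are mine, simple stand-ins for the real value formula he gives below:

    # Toy developer agent: drift toward valuable land, build (or densify),
    # and commit only if the change raised the value. The grid and the
    # "density 3 is best" value function are stand-ins for the real
    # formula given below.
    import random

    def land_value(grid, cell):
        return 10 - abs(grid[cell] - 3)  # toy: density 3 is ideal

    def developer_step(grid, rng):
        # Sample a few cells and act on the most valuable of them.
        cell = max(rng.sample(list(grid), 3),
                   key=lambda c: land_value(grid, c))
        before, old = land_value(grid, cell), grid[cell]
        grid[cell] = old + 1             # build if empty, densify if not
        if land_value(grid, cell) < before:
            grid[cell] = old             # value fell: don't commit

    rng = random.Random(0)
    grid = {(x, y): 0 for x in range(3) for y in range(3)}
    for _ in range(60):
        developer_step(grid, rng)
    print(grid)  # densities rise toward 3, then extra building stops paying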

Value is the key that drives development.

Residential values:

  • Near water
  • Near other residents
  • View (higher than average)
  • Far from industry

Commercial values:

  • Near market
  • Near customers
  • Near roads
  • Near water
  • Flat land
  • Away from parks

Industrial values:

  • Flat land
  • Near water
  • Near industry
  • Near roads
  • Far from residents
  • Far from parks

Park values:

  • Near other parks
  • Far from industry/commercial
  • Not valued by other uses
  • Hilly terrain
  • Near water
  • Near residents

value = constraints * (importance * terrain) + honey

terrain vectors relate to proximity to water, elevation, etc…
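
Spelled out in Python for one use type. Only the shape of the formula comes from the talk; the terrain measurements, weights, and numbers here are mine:

    # The value formula above, for one use type. Only the formula's shape
    # (constraints * (importance . terrain) + honey) is from the talk; the
    # terrain measurements, weights, and numbers are invented.
    def land_value(terrain, importance, constraints, honey=0.0):
        # terrain:     per-cell measurements, e.g. [near_water, flatness, view]
        # importance:  how much this use type weighs each measurement
        # constraints: 0 or 1 -- hard legality (zoning, slope limits, ...)
        weighted = sum(i * t for i, t in zip(importance, terrain))
        return constraints * weighted + honey

    # A residential agent valuing a flat waterfront cell, with a bit of
    # user-placed "honey" attracting development there:
    print(land_value(terrain=[0.9, 1.0, 0.4],
                     importance=[0.5, 0.2, 0.3],
                     constraints=1,
                     honey=0.25))  # 0.45 + 0.20 + 0.12 + 0.25 = 1.02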

He gets into a comparison between the sim urban planning results and real-world urban planning results. Houston versus the sim was really fairly similar. Consider what that means for a moment, please. It means that our own US values of urban planning and growth are nicely described by the values stated above. Makes one think, don’t it?

Future work they will add to the simulation:

  • Speed
  • Mixed use
  • Better parcels
  • Deeper road hierarchy
  • Higher level control
  • History, culture (it’s tuned for the US now), “character”

The world’s most popular electronic game system?

Nintendo? Xbox? Playstation? Nope.

Here’s the full article, from Seth Schiesel at the New York Times: Windows Is Ready to Tout PC’s as Gaming Devices

The world’s most popular electronic game system is the Windows PC.

Surprised? If you shop in any of the chain electronic game shops, you probably are – because the shelf space that stores provide to PC games has been shrinking year by year, taken over by the consoles and the portable gaming systems.

I’ve never thought this shrinking space for PC games made sense – there are a lot of PCs out there, and they’re not going to go away. They are extremely capable gaming platforms and (for now) they’re the only platform that casual developers and hobbyists can produce games for.

Those shops weren’t without revenue and other reasons to do this – but I do think the equation is changing, and their assumptions will need to change. Consoles often have more power – especially for handling graphics – than an average PC. The next generation machines will make this even more so. Consoles (for now) integrate more easily with big TVs and loud stereo systems. Consoles have had a lot more advertising and marketing, and certainly have a higher “cool” factor thanks to it. And consoles, besides being cheap because they are built only to play games, have been cheap because Sony and Microsoft and Nintendo have been willing to lose money on them in order to win market share. But note that the price difference between a PC and a console is narrowing a lot with the next generation of consoles.

The article identifies the most important reason why the PC will get attention again as a game platform, and it’s something I’ve blogged about before: World of Warcraft, all by itself, is making $1,000,000,000 a year. Other online PC roleplaying games like Lineage 2 and Everquest 2 are also making lots of money, though from the numbers I’ve heard they altogether only make as much money as WoW alone.

So what else is happening?

Microsoft is also obviously interested in defending the Windows platform, in giving people more reason to use Windows and like Windows. The ubiquity and popularity of electronic gaming makes it a really obvious way to do that.

Windows Vista, the new Microsoft PC operating system coming (hopefully) around the end of the year, is obviously a good opportunity for a new marketing campaign. I’m pretty sure Microsoft wants to sell as many Vista upgrades as fast as they can. They’ve been talking about and working on this for a while, as this press release from May shows: Microsoft is bringing Xbox live to Windows Vista, and is launching a “Games for Windows marketing campaign, and a strong retail initiative to promote the Games for Windows platform.”

This article from GamePro.com last week includes an interview with Rich Wickham, the director of Microsoft’s Games for Windows division, including specifically addressing some points raised in the NYT article. For instance:

“when Windows Vista hits, Microsoft will work with retailers to make Windows games as prominent as, say, Xbox 360 games. Microsoft will also launch a huge awareness campaign to show off the latest Windows games.”

“Most crucially, games are finally getting a renewed focus in the operating system itself. In Vista, games will be prominently displayed on what Wickham called “the most valuable real estate in all of technology” — the Windows Start menu. That link will take you to the Games Explorer, a new feature that neatly arranges your installed Windows games, complete with high-res box art. Think of it as iTunes for your PC game collection.”

“Microsoft claims that destructible environments and other elaborate visual details not possible in the current Direct X 9.0 will get an enormous boost with Direct X 10.”

All this is clearly interesting from a business and technology point of view – and not only because Microsoft and Windows are so important to global business and technology. Games already make more money every year than Hollywood does. Games have pushed, and will continue to push, the envelope of electronic technology. Games are already a social phenomenon, and will become even more so. Games already cross demographic lines surprisingly well, including age and gender lines – and the trend is for this to become even more true.

The importance of and interest in games that we see (my company and I) is why we chose to focus our Kid’s Programming Language on enabling beginners to program their own games – and we’re happy to say this is working very very well as a marketing and product design decision. All of this importance and interest in games also defines one of the clear opportunities for us to address the Computer Science crisis: games. Games? Sure – what better way to encourage and motivate students to learn computer science and computer programming?

Did you know that US Computer Science enrollments dropped by 60% from 2000 to 2004? Think about that one for a minute. Or two.

What the heck is a Vitruvian Phrog?

I can do an occasional company announcement on my blog, right? This is a big one! I’m not covering a lot of background on KPL (Kids Programming Language) with this post; www.kidsprogramminglanguage.com is a great place to catch up if KPL is new to you.

KPL v 2 release candidate details

KPL v 2 has dependencies on the .NET Framework 2.0 and DirectX 9.0c, each available as free downloads from those links. They should be installed before installing KPL v 2. The KPL v 2 download is available from this link, and is 27 megabytes in size. That setup.exe will optionally download the .NET framework 2.0 if it is not already installed on your machine. A stand-alone setup program which includes both the .NET Framework 2.0 and DirectX 9.0c is available upon request. As a release candidate, this KPL v 2 build will work until November 15, 2006 – nearly four months.

The Future of KPL

KPL has been available as educational freeware for a year now and has been downloaded well over 100,000 times. To ensure KPL’s success, distribution and longevity, we knew we had to figure out how to get more resources behind it than it has had thus far. Our schedules, website and content have not been what we’d like them to be this year, because a very few of us are doing what we can with KPL at the same time that we do consulting and contract work that pay bills and put food on the table for our families. We hope you’ll find the plan as exciting as we do.

Let me start by saying that KPL v 1.1 will remain available, free, and community supported exactly as it is today. A fully functional version of KPL v 2 will also be available, free and community supported. We certainly want KPL to remain available to schools, parents, students, hobbyists and beginners who can’t afford to pay for it. But we know that many people can afford to pay a little bit for it, and so our plan is to offer a commercial version of KPL which will help to support all versions of KPL, including the freeware versions. The price is still being finalized, but our goal is to offer it for less than the price of a single console or PC game.

Partnerships are just beginning

Making a real company and a real product out of KPL also makes us more viable to established companies who might partner with us, because they can better rely on our continued success and existence. We are glad to announce we’re already working with the folks at www.CommunityServer.org, as they put together a new and much improved website for us, which will have much of the same functionality that you can see on their own site. The new site will offer to KPL users much better threaded discussion forums, file upload and download areas, image galleries, blogging, and many other important community-centric features.

We are working on a particularly exciting partnership which we’ll announce soon – stay tuned for that one!

The Vitruvian Phrog

In part because of user feedback that Kid’s Programming Language was interesting to a lot more than just kids, we’ve come up with a new name and new branding, which we will use to turn KPL v 2 into a new product:

The Vitruvian Phrog!

In September, KPL v 2 will be rebranded and launched as Phrogram, with the new site at http://www.Phrogram.com, running alongside of and linking back and forth with www.KidsProgrammingLanguage.com. We hope your first impression is of fun. We certainly have lots of fun things planned for the name and the image.

The Three Versions of Phrogram

Phrogram Express will be a completely free version, with no time limit on its usage
Phrogram will be a consumer upgrade to Express, costing less than a single console or PC game
Phrogram Academic will be a discounted version of Phrogram for students and teachers

The Phrogram Express product will be only slightly simpler than the Phrogram commercial product – we will detail those differences shortly. Phrogram and Phrogram Academic purchasers will have access to some content earlier than Phrogram Express users, and certain student- and teacher-oriented materials will only be available to Academic users.

As always, we’d like to hear your feedback!

Who Is Chad Hurley And Why Is He Smiling?

Three interesting bits of news:

1) From Reuters:

The International Herald Tribune has launched a new twist on the podcasting craze sweeping media companies with a service that instantly generates an audio version of any article in the newspaper.

Full article is here, and the IHT service is here. The service is brand new and having technical difficulties – I’m betting a scalability problem as a few million people want to try it – but it’s a really interesting idea. If it turns out to work well, I’ll blog more about it.

2) In a meeting yesterday someone said “First thing I do every day is check what kind of crazy stuff went up on YouTube overnight.” Online and on-demand video is just getting started. Yeah, Chad’s having a good time. Check out how fundamentally social the site is – one key to why it’s working.

3) They’ve been working publicly on this for a while, but it’s finally happening: Microsoft, Yahoo test IM partnership. “The enemy of my enemy is my friend” immediately comes to mind. There will always be a tension between companies – whose automatic instinct is to be proprietary in order to defend brand and market share – and customers – who would really really like it if systems were compatible, integrated and connected. This isn’t true only of IM, of course – cellular companies are having their own version of this around their “calling circle” features. As a customer, I know what I would LIKE to have happen – but the problem is, it seems like companies only consider things like this for the sake of marketing or competitive advantage.

VW Beetle, yep. 1350 horsepower, yep.

Very long and productive day today, and another one tomorrow, so this is a fun slacker blog for me.

Here’s a link to the article from Wired, and here’s the coolest photo:

Can we stop talking about pimp my ride now?

Some highlights from the article:

Ron Patrick … mounted a $270,000, 26,000-rpm, 1,350-horsepower, Navy-surplus helicopter jet turbine in the trunk.
The jet jumps the Bug’s speed from 80 to 140 mph in less than four seconds.

Despite all the muscle, Patrick doesn’t race. “I’m 49, so frying some 16-year-old who just saw The Fast and The Furious doesn’t do anything for me,” he says. But he has been known to light up Northern California’s freeways on weekdays between 2 and 3 am. “More than one late-night truck driver on I-5 has been passed by a low-flying comet.”

When he entered it in the Los Angeles Grand National Roadster Show in January, he was greeted with disparaging looks and scoffs from the gearhead elite. So when the winners started revving their V-8s, Patrick responded by firing up the jet and blasting out a 6-foot-long flame. Officials screamed at him to shut it down, and then banned him for life.

My hope is that this will so obviously kick the ass of any other “pimp my ride” ride that the fad will fade away. For another decade or two anyway. Yeah, I’m an optimist. 😀

The Peace Bomb vs. Bazillions of Dollars

“The third annual Game Design Challenge, held at the 2006 Computer Game Developers conference in San Jose, asked contestants to describe a game that could win the Nobel Peace Prize. Designer Harvey Smith won with “Peace Bomb,” a networked game that would spontaneously draw people together for various constructive projects, like tree planting, cleaning up, building homes or donating money. Smith speculated, “After pooling together and trading resources, players can win on a quarterly basis, or every six months or whatever and [the] flash mob erupts around a socially constructive movement.”

The full article is in the latest issue of Escapist, and is about a lot more than Harvey’s Peace Bomb. I’m not going to mention the title of the article – what were they thinking?!? – but let me mention some points from it in an effort to get you to click through and read:

  • The blending of real-world and virtual-world economies (been happening for years now)
  • The early-but-booming in-game advertising market (MSFT just spent $400,000,000 on a company that does this)
  • The parallels between hot social networking sites (like MySpace) and hot social MMO games (like World of Warcraft)
  • Social networking sites have a lot more mainstream appeal than even the ridiculously successful World of Warcraft has had
  • Online and virtual lives do and will feel perfectly natural to current and future generations
  • Asian social networking sites are leading the way by already adding game-like features
  • Ubiquitous online gaming of the future will be based on mobile and location-aware devices – early examples already exist

Here’s some more supporting data, in case all that hasn’t made you click that article link up there yet:

Vivendi revenue from World of Warcraft, in its first year, was over $1 billion. That’s $1,000,000,000. Investment and interest in the business of online gaming is, obviously, going through the roof. Nothing like a huge pile of money to attract more huge piles of money, eh?

MySpace was founded in July 2003, has grown to 88 million registered users in three years, and just passed Google and Yahoo! as the world’s most visited domain. How’s that for viral success, and for internet-time?

Argo aims guns at more than iPod

From today’s Seattle Times, the coolest Microsoft news I’ve heard in a while. We shall see what we shall see:

“Microsoft is indeed developing a digital-media player to compete with Apple’s iPod, and there’s much more to the story.

“A few details trickled out last week from music companies that Microsoft is lining up to support the device. Microsoft isn’t commenting, but I was able to piece together a broader picture with some research, reporting and information from a source close to the project. What’s being developed is actually a complete line of Xbox-branded digital-media products, including a device that plays media, a software media player and an online media service.”

Here’s a link to the full article, by Brier Dudley.

Some more important highlights:

“the device is expected to go on sale by Christmas. It has Wi-Fi capability so it can connect wirelessly to home and public networks and other players.”

Umm, yeah, WiFi network access from a mobile media and game device. Yeah, that rocks.

“Argo is likely to showcase another Allard project — XNA, a new toolkit that helps game developers create titles for multiple platforms.”

This is really really really big news, though Microsoft has still been keeping a low profile about XNA. Here’s a link to the Microsoft Press Release about XNA, and then I’ll quote a few key paragraphs from it.

“XNA Studio represents a set of tools and technologies Microsoft is building to help streamline and optimize the game development process.”

“The XNA Framework contains a custom implementation of the Microsoft® .NET Framework and new game-development-specific libraries designed to help game developers more easily create cross-platform games on Windows® and Xbox 360 using the highly productive C# programming language. Using the XNA Framework, game developers will benefit from the ability to re-use code and game assets in developing multiplatform titles, without sacrificing performance or flexibility.”

“With millions of developers worldwide proficient in C#, the XNA Framework is designed to make game development significantly more approachable for independent and aspiring game developers, while enabling rapid prototyping and concept iteration.”

Put the pieces of the two stories together and consider the implications, if all this is true. Here’s how Next Generation’s headline said it: Allard’s iPod-Killer also a PSP/DS-Killer?

Yeah, the best Microsoft news I’ve heard in a while. We shall see what we shall see. Stay posted for more on this one.

At Colleges, Women Are Leaving Men in the Dust

Here’s a link to the full article, from today’s New York Times. It’s very much worth a read, and some thought.

Our professional interest in education started with our Kid’s Programming Language product. Our follow-on product is designed specifically for high school and university students. This interest led me to the article.

There are a few points which are directly relevant to what we’re doing with KPL and Phrogram, and I’ll start with those.

People outside of Computer Science education mostly have not heard of the “Computer Science Crisis” – but that’s a fair characterization of the fact that from 2000 to 2004, the percentage of incoming undergraduates indicating that they would major in CS declined by over 60 percent. The decline and the crisis are even deeper if one considers only female students: 0.3 percent of incoming women indicated an intent to study computer science. That’s zero point three percent. On the one hand, the article’s data might explain some of the decline in computer science. On the other hand, it demonstrates that computer science has a double problem: not only has interest in CS been declining across the sexes, but the demographic group which has most favored CS (men) has also been shrinking as part of the student population.

I’ve admitted to being an optimist, so this also implies to me one of the clear opportunities to address the Computer Science crisis (not to mention the unfortunate gender bias in software development jobs): make it interesting to girls. Fortunately, there is a lot happening now to address the computer science crisis and the gender bias, and these’ll be topics I focus on in this blog.

Another point relevant to KPL and Phrogram is the impact that video game addiction can have on one’s college education. My wife pointed out that the article wasn’t exactly fair on this point: surely there are other addictions which can have at least as much negative impact? Drinking, for instance? Clearly, any addiction is a bad thing – and probably it’s a smart relationship partner who sees one, calls us on it, and won’t accept it if we can’t change it? So, point taken: video game addiction deserves attention and awareness, as do alcohol and drug and sex and food and nicotine and other addictions – and not just for college students. “Moderation in all things” comes to mind.

But given nearly-universal interest in computer and video games – this crosses gender lines very well, by the way – there is also an education opportunity in gaming. What better way to interest beginners in software design and development than to help them make their own games? KPL has already proven that this works, and we have mail from parents and teachers thanking us specifically because it does. Here’s a mail from just this week that proves the point, from Jeff Spirer in California:

“I am a technical consultant to a California K-8 charter school. The students have access to the internet for research.

“However their main endeavor is playing on-line games. In an effort to channel this interest to more educational pursuits I installed KPL1.1. I gave the students a brief explanation of programming principles and a demo on some simple graphical constructs. The response was overwhelming. Some students had simple games designed and programmed within a short period of time. Even the students that found the subject more difficult gained an appreciation of the rigorous logic required to produce results.”

Much more about this topic in the future – there are a bunch of really interesting things happening around the use of gaming in support of education.

There are many broader points raised in the article:

Isn’t it great to see women stepping up to the opportunity being presented? Doesn’t the clear success of the effort to support women’s education prove that we can successfully correct unfairness or bias in our educational system?

Is there really a “boy crisis” happening, or are we just misinterpreting the great success and progress for girls, which has not been matched by boys? This is a very important point, I think – since our actions in response are pretty likely to be bad ones if our perception of the situation is wrong in the first place.

What are the values, goals, examples and role models that our society provides to boys (and girls) as they grow up? The difference presented around planning, or the lack of it – I’m sure it’s not as stark or as one-sided as the article presents, but either way this point alone surely deserves some thought and attention.

Is there still more educational inequity for us to address based on racial and economic differences than based on gender differences?

Could parents and teachers and administrators and advisors do a better job of preparing incoming college students to deal with freedoms and responsibilities that a lot of them never encounter until they get to a freshman dorm?

There is increasing interest in the effectiveness of single-sex primary and secondary schools and classrooms – but it wasn’t long ago that many single-sex universities opened up to the other sex. Anyone have research and data on this point?