Four months ago Galen and I sawed down a white ash tree to get some green sapwood to bend into the keel of a model ship. Yesterday Galen presented the completed Viking warship at his school, as each student presented his or her “Inquiry” project. The consensus is that Galen enjoyed his project almost as much as his Inquiry mentor (guess who?). The video below tells the story of the construction of the model.
I awoke this morning to see that not a leaf on the hornbeam was fluttering, and my head fell back to the pillow in dismay. This was a sure sign that I had a new hobby. It was the KAPer’s lament: no wind. I had flown a camera on a kite for two days in a row, and the thought of a calm day was discouraging. But I had started a stitch of yesterday’s aerial panorama before I went to bed, so I got up two hours before Galen had to be at school to check on it.
All the parts before assembly
I built my first KAP rig last week from one of Brooks Leffler’s kits. I have never built anything with servos and DIP switches and carbon fiber legs, but I got to use my Dremel tool, so it felt safe. Brooks has designed an elegant system for suspending a camera so that it can point in any direction. The pointing and shutter release are done either at predetermined intervals (autoKAP) or by radio control from the ground. The kit I built had servo motors for both panning and tilting and electronics to automatically point in as many as 76 directions, taking photos that can cover a downward-looking half-spherical view. By replacing the tiny circuit board with a radio receiver, the motors and shutter could be controlled via a transmitter on the ground. The rig follows the standard RC airplane/car/boat/helicopter protocols, so compatible equipment is readily available.
It took a couple of weeks for Jeffrey Warren’s message to sink in. At first I thought his workshop on balloon aerial photography at the Fine gigapixel conference in November promoted a fringe pursuit – lofting cameras on tethered helium balloons to make better maps than were currently available. But this pursuit emerged from the elegant convergence of modern camera technology and traditional lofting methods (balloons and kites). It was now possible for anyone to make good, current “maps” from stitched, low-elevation vertical photographs. Jeffrey is committed to inventing workarounds to the technological and financial obstacles that would otherwise put this process out of reach of the communities that might benefit from good maps. This was the focus of his thesis at MIT, and he has had great success bringing communities together around these new map images and the experience of making them. He has made the objective so compelling and the process so simple and inexpensive that I soon realized I had to try it.
Today I presented some preliminary results of a five-year winter wildlife tracking project my town’s conservation commission has just completed. I was part of a workshop on wildlife connectivity at the Vermont Statewide Conservation Conference in Rutland.
We had some spatial analysis and mapping of the tracking done by Kevin Behm at our county’s regional planning commission, and I wanted to display the mapped results in a compelling way. Google Earth was an obvious candidate for display, but driving Google Earth for a live audience is asking for trouble unless the show is simple. So I used the Movie Maker tool in Google Earth Pro to record three minutes of video highlighting the non-simple results and their context.
Galen’s uncle and aunt gave him a nice set of N scale model trains a few years ago, and we have added track and accessories since then. For his birthday in May, Galen got three building kits from GCLaser.com. Although there are lots of HO scale plastic model kits of buildings, N scale is too small to work well for molded plastic kits, and the available N scale kits are 2-4 times the price of the HO kits even though the models are smaller.
The GCLaser kits are all wood. The pieces are precisely cut by laser from micro plywood. The material and quality control are excellent, and the parts fit together beautifully with almost no shaping or cleaning. They are more expensive than plastic HO kits and much harder to assemble, but the results can be impressive. When the kits arrived, it was obvious that they weren’t quite appropriate for Galen, so I selflessly stepped up and assembled them myself.
Google has digitized the text of five million books. The old ones are in the public domain and you can read them online at http://books.google.com/. Most of the more recent ones are still protected by copyright law, but that law did not anticipate the ingenuity at Google. The individual words and phrases in all those books are now in a huge database that anyone can search. Maybe you can’t read every book online, but you can learn how book writers have used the language over the past two centuries at http://ngrams.googlelabs.com/. This allows a very new kind of literary research: answering questions without reading (although it turns out that some reading is still required).
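The ngram counts behind the viewer are also available as downloadable flat files, so the same kinds of questions can be asked offline. Here is a minimal sketch, assuming a simplified tab-separated file with (ngram, year, count) columns; the files Google actually distributes have additional columns, so the indices would need adjusting:

```python
import csv
import io
from collections import defaultdict

def counts_by_year(tsv_text, word):
    """Sum yearly occurrence counts for one word from TSV rows
    of (ngram, year, count). A simplified stand-in for the real
    Google Books ngram file format."""
    totals = defaultdict(int)
    for ngram, year, count in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        if ngram == word:
            totals[int(year)] += int(count)
    return dict(totals)

# A tiny made-up sample in place of a multi-gigabyte ngram file
sample = "whale\t1850\t120\nwhale\t1851\t95\nship\t1850\t400\n"
result = counts_by_year(sample, "whale")
```

With real data, plotting `result` against year reproduces the kind of frequency curve the Ngram Viewer draws.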
After it warmed up a bit yesterday, I tried out my new digital field protocol on a wildlife tracking transect behind my house. My goal was to record the identity, quantity, and location of large animal tracks in the snow which crossed the path I was walking (my “transect”). I am trying to develop a protocol for purely digital collection of these data.
Three types of data must be collected: date, location, and observation. The date (and time) is easy because most digital data carry a time stamp. Collecting location data requires a GPS-enabled device. Collecting the wildlife observations in digital form requires manual data entry (keypad or touchscreen) or audio or video recording. I have seen some smart phone apps which could be bent to this purpose, but I don’t have such a phone, so the easiest route for me is audio, although this will require later translation to textual data.
[Update: I abandoned this three-device protocol after a few trials and now use only the GPS to make waypoints for each observation. The new method is described here.]
Linking the GPS data with the audio observations is the hard part. There are mature protocols for attaching GPS coordinates to image files, but not to audio files, although it should be easy to implement this on a smart phone. I used a digital photo as the link between the GPS data and the audio file. A key component of my protocol is a program which attaches GPS coordinates to photo files and can also associate an audio file with each photo. The program can also create a KMZ file or GIS shapefile which includes the georeferenced audio files. The program is RoboGeo, which costs $80. This is the program I use to georeference photos taken while the GPS is recording a tracklog.
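The heart of this linking step is timestamp matching: every photo and every GPS fix carries a time, so a position can be interpolated for any observation. Here is a minimal Python sketch of that idea (not RoboGeo’s actual algorithm; the tracklog format and function name are my own invention):

```python
from bisect import bisect_left
from datetime import datetime

def interpolate_position(track, t):
    """Estimate (lat, lon) at time t from a tracklog of
    (datetime, lat, lon) tuples sorted by time."""
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1:]      # before the tracklog started
    if i == len(track):
        return track[-1][1:]     # after the tracklog ended
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    f = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

# Example: a two-fix tracklog and a photo taken halfway between fixes
track = [
    (datetime(2011, 1, 15, 10, 0, 0), 44.000, -73.100),
    (datetime(2011, 1, 15, 10, 1, 0), 44.001, -73.102),
]
photo_time = datetime(2011, 1, 15, 10, 0, 30)
lat, lon = interpolate_position(track, photo_time)
```

The only catch in practice is clock drift: the camera’s clock must agree with the GPS clock, or a constant offset must be measured and subtracted before matching.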
A few days ago the Wired Science blog at Wired.com embedded the gigapans from the juried gallery show at the Carnegie Museum of Natural History. The same content also appeared at the Wired Science Japan site, so I used some online translators to see what had been written in Kanji about my hummingbird gigapan.
Here is the text that first appeared at the Wired Science blog:
Don’t let the 40 or so hummingbirds in this panorama fool you. There are really only two. Photographer Chris Fastie called it a “perplexing distortion of reality.” He took 78 photos over the course of a few minutes, then selectively merged them to capture multiple feeding and flying positions of the birds. “Rarely will the local male allow birds other than his mate to use a food source in his territory, so a feeding flock like this is impossible,” Fastie wrote on GigaPan.org.
This caption is a bit of a “perplexing distortion” because: (1) there are only 28 hummingbirds in the image, not “40 or so,” and (2) the 78 photos taken by the Gigapan imager did not include any birds. An additional 28 photos of hummingbirds (and two of insects) were pasted onto the stitched panorama. This misinformation is partly my fault because my original caption at gigapan.org was not very explicit. So the people hired to do the translation were already at a disadvantage, like the third person in a game of “telephone.”
The Fine International Conference on Gigapixel Imaging for Science is winding down and I am really looking forward to the cocktail reception when today’s poster session ends. There is also going to be a raffle for a Gigapan Epic Pro, so there is still much to look forward to.
It has been a real joy to meet lots of people whom I knew only through their work online at gigapan.org, many others whose work I hope to know soon, and all the media people who might be incorporating gigapans into their work. It was tremendous fun to see dozens of members of the gigapan community whom I met 18 months ago at my first Fine Outreach for Science Workshop. I have really enjoyed interacting with many people who are more obsessed with gigapixel imaging than I am.
The proceedings papers are now online at http://gigapixelscience.gigapan.org/. A higher resolution PDF of my paper is here. All the presentations were videotaped, so maybe they will be online at some point so I can see the concurrent talks I missed. [UPDATE: Video of my presentation at YouTube.]
The Prezis for my conference talk and the one for my Fine Outreach for Science talk are available online at Prezi.com. These are somewhat sparse in the sense that they are not very self-explanatory, but you might glean something from them if you attended my talks. Here is the motion bubble chart of my gigapan history that I used in the FOFS Workshop. And here is the kml file of Miss Pixie so you can see the Google Earth verification of the map I made of her locations. Here is a pioneering paper by Adam Dick et al. about mapping trees from 360° panoramas.
Thanks to the GigaPan teams for the tremendous effort they put into this event. It was a huge success.
Reese, Leah, Doug, Lindsay, Emily, and Audrey evaluate the old growth status of the surrounding forest.
I joined the University of Vermont Field Naturalist and Ecological Planning graduate student field trip yesterday to the 22,768 acre Giant Mountain Wilderness area in the Adirondacks. The trip was led by Alicia Daniels and focused on the plant community response to a dramatic 1963 landslide and outburst flood which deposited a mile-long debris flow in the Roaring Brook valley. The debris flow and successional puzzle were intriguing, but I was stunned by the old forests on the valley sides. There were hemlocks, red spruce, and sugar maples everywhere that appeared to be 300-400 years old. One increment core of a spruce and one cross section of a hemlock across a trail confirmed 350-year-old trees, but it looked like many others were that old. I have never seen such a majestic stand in Vermont.
I never heard the term “ledge” used as a synonym for bedrock before I moved to Vermont. But I once heard a guy in Maryland confirm it was bedrock by saying “Yeah, I think that’s a piece of the state.”
Here is a Google Earth KML file of two ledgey places I visited this week. One was made of Monkton quartzite with some dolomite strata and a rich, unusual plant community, and I accompanied some experts who identified three state endangered species. The other was made of Cheshire quartzite with somewhat less calcium available, and I recently found a lovely grove of pitch and red pines there. A new gigapan of that Pitch Pine-Oak-Heath-Rocky Summit community is included in the KML file.
You can see photos, GPS tracks, and the gigapan by downloading the KML file into Google Earth, or by clicking the link (below the break) to open it in a new browser window, or just use the embedded window at the bottom of the post (Your computer must have the Google Earth browser plugin installed).
I have been collecting data about the gigapans I upload to gigapan.org ever since I noticed some unexplained behavior in the View counts and Explore Scores of my first public gigapans. Unlike YouTube, Gigapan does not make archival user data publicly available, so it has to be independently collected. Kilgore661 has been doing a great public service by collecting these data for all gigapans for about three years, and you can explore his archive here and use his nifty graphing tools. I have more than a year’s worth of slightly denser data on my own gigapans. Graphs of these data are wildly revealing about the nature of Explore Scores and the inherent differences among gigapans in how they accumulate Views. I hope to show some of these results at the Fine Outreach for Science Workshop in November.
The number of Views and the Explore Score for four of my gigapans for their first three months. Click to enlarge.
Unlike Kilgore661, I wasn’t smart enough to use the gigapan API to automate this process, so I have been screen scraping, and I just had to stop. The gigapan API is essentially undocumented, but thanks to Miriam at gigapan and Will at Fastie Systems, I now have a tool that collects the pertinent data on all of my gigapans and makes the data easy to paste into Excel. You can see the tool in action here, and learn how to install it on a Web page to fetch the information about your own gigapans.
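For anyone attempting something similar, the scraping half of the job is simple once you know what the page looks like. Below is a minimal Python sketch that pulls a View count out of a page’s HTML and appends a dated row to a CSV file. The regular expression and the gigapan.org URL pattern are my assumptions, not the real markup, so both would need adjusting against the actual page source:

```python
import csv
import re
import urllib.request
from datetime import date

# Hypothetical pattern -- the real gigapan.org markup may differ,
# so inspect the page source and adjust.
VIEWS_RE = re.compile(r"([\d,]+)\s+views", re.IGNORECASE)

def scrape_views(html):
    """Pull the View count out of a gigapan page's HTML, or None."""
    m = VIEWS_RE.search(html)
    return int(m.group(1).replace(",", "")) if m else None

def log_views(gigapan_id, csv_path="views.csv"):
    """Fetch one gigapan's page and append (date, id, views) to a CSV."""
    url = "http://gigapan.org/gigapans/%d" % gigapan_id  # assumed URL scheme
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    views = scrape_views(html)
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), gigapan_id, views])
    return views

# The parser can be checked without touching the network:
sample = "<span class='stats'>1,234 views</span>"
n = scrape_views(sample)
```

Run `log_views` from a daily scheduled task and the CSV accumulates exactly the kind of time series graphed above, ready to paste into Excel.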