Q&A: Lytro Exec Charles Chi Talks Light Field, Battery Life, and Licensing

Editor's note 2/29/2012: Lytro's light-field camera started shipping to customers this week. You can read our hands-on preview of Lytro's camera here.

Lytro executive chairman Charles Chi
A few weeks back, I worked on an article regarding the increasing popularity and versatility of CMOS imaging sensors, as well as the major reasons why the sensor technology is finding its way into everything from DSLRs to cell phones. For the article, I spoke to a number of technical experts and advisors from Canon, Nikon, and Sony, as well as Lytro executive chairman Charles Chi.

For a small company whose first-generation camera has yet to come to market (it's available for preorder on the Lytro site, and due in the first few months of this year), Lytro is making big waves. There's a major reason for that: Lytro's camera will be the first consumer device that allows users to focus (and refocus, and refocus...) an image after it's shot, employing what is called a light-field sensor.

I used only a small portion of my interview with Chi in the CMOS story, but given the massive amount of buzz surrounding Lytro in the past few weeks--the company won Last Gadget Standing honors at this year's CES, its "shoot first, focus later" camera will be available to the masses soon, and stories are trickling out that Lytro CEO Ren Ng met with the late Steve Jobs to discuss using the company's imaging technology in future iterations of the iPhone--the entire interview transcript may be of interest to our readers.

What follows is the entire phone conversation I had with Lytro executive chairman Charles Chi, in which he discusses the sensor and long-life-battery technology embedded in each Lytro camera, the possibility of licensing Lytro's camera module out to cell phone makers, and the future of the sensor industry.

PCWorld: Would Lytro's light-field capture be possible with a "slower" sensor technology, such as CCD?

Charles Chi: To capture light field, your starting point is a traditional sensor. A light-field sensor is agnostic to CMOS or CCD--in fact, it doesn't matter whatsoever. Because of the way we capture light, we're looking to capture the actual rays of light. We can actually take sensors that have more defects in them than a traditional camera can tolerate, so that provides even more flexibility for the sensor vendor.

What we do is a custom package with the sensor, a package with what we call a micro-lens array. It fits right on top of the sensor itself, and that's what creates the "light-field" sensor, in addition to a lot of software and processing that comes afterwards.
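The post-capture refocusing Chi alludes to is commonly described in the research literature as a shift-and-add operation over the sub-aperture views recorded under the micro-lens array. The sketch below is an illustrative toy version of that general idea in Python/NumPy, not Lytro's actual pipeline; the 4D array layout and the `alpha` parameter are assumptions made for the example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocus of a toy 4D light field.

    light_field: array of shape (U, V, Y, X) -- one sub-aperture
    image per (u, v) viewpoint. alpha selects the synthetic focal
    plane; alpha = 0 leaves the views unshifted.
    """
    U, V, Y, X = light_field.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # Shift each viewpoint in proportion to its offset
            # from the central view, then accumulate.
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Running `refocus` with different `alpha` values synthesizes different focal planes from the same capture, which is the essence of "shoot first, focus later."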

All the benefits that you would usually get from a CMOS sensor, we benefit from as well, in terms of fast read-out time. More important, I think there's so much research being done on different types of pixels for CMOS sensors, so that's the type of technology you'd want to be on. The industry is doing so much development there.

In our first product, we're using a CMOS sensor. We've done internal prototypes using both CMOS and CCD, but our first product is based on a CMOS sensor.

PCW: The traditional view is that CCD sensors produce "better" image quality, while CMOS sensors are more versatile and produce faster output. Has that argument changed with recent CMOS improvements?

Chi: I think that definitely has been the traditional view, but there's been so much development in the CMOS sensor. A lot of image-quality work, and a lot of the top-end cameras, are based on CMOS sensors. So while there may still be an advantage for CCD, I think the benefits [of CMOS] far outweigh any downside that still exists. On top of that, if you're like us and facing a lot of technological innovation challenges, it's good to be riding the CMOS horse.

PCW: As for energy efficiency, Lytro is touting its first-generation camera as having an extremely long-life battery--it will last until the internal memory fills up. Is that more of a function of the chip technology, or the battery itself?

Chi: It's a combination of several things. One, we do have a custom battery for the camera, and we've really optimized the space that's available for that battery. We've really maximized the amount of battery we can put into the camera. And because it's a "captive" battery, not a replaceable battery, we can get that much more battery cell life out of the camera, because it doesn't have a case and so forth. A good example of that would be a product like an iPad, where it's so thin: They can get away with that and still have long battery life by using a captive battery. We've done the same thing.

The second piece is an optimization of the components in trying to minimize power consumption. Because we don't have to focus with every shot like a traditional camera, we're not running an autofocus motor. So that provides a lot of benefits for battery consumption.

And then lastly, we do an overall optimization of the system. We power things off and on intelligently in order to extend battery life as much as possible. And the net effect of it is that you can shoot the entire capacity of the memory with one charge.

PCW: Do you see your company licensing the imaging technology out to other camera and camera-phone manufacturers?

Chi: We feel that there's a lot of technology that we can apply to some very differentiated, very interesting, and very exciting products. We feel that we have the capital to do that, the capability in the company to do that, and also the vision to execute on the program. So we're very focused on building our own branded cameras and product line to sell in the marketplace.

If we were to apply the technology in smartphones, that ecosystem is, of course, very complex, with some very large players there. It's an industry that's very different and driven based on operational excellence. For us to compete in there, we'd have to be a very different kind of company. So if we were to enter that space, it would definitely be through a partnership and a codevelopment of the technology, and ultimately some kind of licensing with the appropriate partner.

On how sensors and their uses will evolve in the near future:

Chi: If you look at the sensor business, they're really in an interesting technological space. On one hand, adding more resolution doesn't have very much value in traditional sensors. So whether it's 14, 16, or 18 megapixels, they're hitting a ceiling of resolution that has value for a consumer camera. It might have value for military applications, but for high-volume consumer cameras, they're really topping out on useful resolution.

On the other hand, for smartphones, they're being pushed toward a smaller and smaller die, both to fit into the small space on a cell phone and because a smaller die usually translates to lower costs. They're hitting limits there, too, because a smaller die means a smaller pixel, and when you have a smaller pixel you capture less light. You get into some fundamental limits in terms of how small a pixel you can get to.

So the industry is in an interesting space, because they have a floor of how small a sensor can be and a ceiling of how much capacity they have. It's kind of like being in the memory business, where you can't make the chip any smaller and nobody wants more memory. What's been really helping the business is very high growth in smartphones, which has been driving more unit volume.

What's interesting is that, for a light-field camera, we don't capture the same kind of information as a traditional sensor. What we're trying to capture is rays of light, and the more of them we have, the more interesting things we can do for imaging. There's a lot we can do for lens correction, aberration; we can do much more dramatic 3D effects, refocus effects. There's a very long list as a result of having more and more sensor resolution. Unlike traditional cameras, which have a [usable resolution] limit, there is no diminishing return on resolution for a light-field camera until you get to a billion pixels.

If you picture this, you can take a large-die sensor from a DSLR, and if you print it with the pixels you would find in a cell phone module, which are much smaller, you could easily get to 100 megapixels with today's technology. Obviously, there are other engineering challenges in terms of heat and power consumption and everything, but to create a 200-megapixel sensor--or what we call a megaray sensor--is all within the industry's reach today.

With light-field capture, we could spur a new round of innovation in the sensor business, and take off these technological ceilings that these cameras have otherwise imposed on the sensor business.

PCW: So greater pixel density and smaller pixel size don't impact image quality with a "megaray" sensor, as they do in traditional cameras?

Chi: We are definitely much more tolerant to noise in a sensor, and much more tolerant to defects in a sensor, and that's because we use multiple pixels and multiple light rays to create the end image. So if you're missing one light ray, or even a couple in a row, that's okay. In a traditional camera, that would very much show up on the final image.
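Chi's defect-tolerance point can be illustrated with a toy numerical example: when an output pixel is reconstructed by averaging many captured rays, a couple of dead sensor pixels barely move the result. The ray count and intensity values below are made-up numbers for illustration, not Lytro's specifications.

```python
import numpy as np

# Suppose 100 captured rays contribute to one output pixel,
# all recording a true scene intensity of 0.8.
rays = np.full(100, 0.8)

# Simulate two defective sensor pixels that read out zero.
defective = rays.copy()
defective[[10, 11]] = 0.0

# Averaging over many rays masks the defect: the reconstructed
# value drops only slightly (error ~0.016 out of 0.8).
print(rays.mean(), defective.mean())

# A traditional camera maps one sensor pixel to one output
# sample, so the same dead pixels would appear at full
# contrast in the final image.
```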
