When online shopping, you’ve probably come across photos that spin around so you can see a product from all angles. This is typically done by taking a number of photos of a product from all angles, and then playing them back like an animation. Luma - founded by engineers who left Apple’s AR and computer vision group - wants to shake all of that up. The company has developed a new neural rendering technology that makes it possible to take a small number of photos and generate, shade and render a photo-realistic 3D model of a product. The hope is to drastically speed up the capture of product photography for high-end e-commerce applications, but also to improve the experience of looking at products from every angle. Best of all, because the captured image is a real 3D interpretation of the scene, it can be rendered from any angle - and also in stereo 3D, with two viewports from slightly different angles. In other words: you can see a 3D image of the product you’re considering in a VR headset.

For any of us who’ve been following this space for a while, we’ve watched startups try to build 3D representations with consumer-grade cameras and rudimentary photogrammetry. Spoiler alert: it has never looked particularly great - but with new technologies come new opportunities, and that’s where Luma comes in.

“What is different now, and why we are doing this now, is the rise of these ideas of neural rendering. What used to happen, and what people are doing with photogrammetry, is that you take some images, run some long processing on them, get point clouds, and then try to reconstruct 3D out of them. You end up with a mesh - but to get a good-quality 3D image, you need to be able to construct high-quality meshes from noisy, real-world data. Even today, that remains a fundamentally unsolved problem,” Luma AI founder Amit Jain explains, making the point that this task - “inverse rendering,” as it is known in the industry - is still extremely hard to get right. The company decided to approach the issue from another angle.
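Luma hasn’t published the details of its pipeline, but the family of techniques Jain is gesturing at is typified by NeRF-style neural radiance fields: rather than reconstructing an explicit mesh, a network learns to map a 3D position (and view direction) to a color and a density, and images are produced by integrating that function along each camera ray. As a rough sketch of that core compositing step - the shapes and function names here are illustrative, not Luma’s actual code - the accumulation along a single ray looks like this:

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Blend per-sample colors and densities along one camera ray.

    colors:    (N, 3) RGB the network predicts at each sample point
    densities: (N,)   non-negative volume density at each sample point
    deltas:    (N,)   spacing between consecutive samples along the ray
    """
    # Opacity contributed by each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives past all earlier samples
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)  # final RGB for this pixel
```

Training renders rays from the input photos this way and backpropagates the per-pixel error into the network, so the fragile point-cloud-to-mesh step disappears entirely.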
“We decided to assume that we can’t get an accurate mesh from a point cloud, and instead are taking a different approach. If you have perfect data about the shape of an object - i.e. if you have the rendering equation - you can do Physics Based Rendering (PBR). But the issue is that because we are starting from photographs, we don’t have enough data to do that type of rendering. So we came up with a new way of doing things.”
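The rendering equation Jain invokes is Kajiya’s classic formulation: the light leaving a surface point toward the viewer is its own emission plus all incoming light, weighted by the material’s reflectance. In standard notation:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

Here L_o is the outgoing radiance at point x, L_e the emitted radiance, f_r the BRDF describing the material, L_i the incoming radiance, and n the surface normal. Evaluating it requires exact geometry and materials everywhere - precisely the “perfect data” a handful of photographs cannot supply.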
“We would take 30 photos of a car, then show 20 of them to the neural network,” explains Jain. “The final 10 photos are used as a ‘checksum’ - or the answer to the equation. If the neural network is able to use the 20 original images to predict what the last 10 images would have looked like, the algorithm has created a pretty good 3D representation of the item you are trying to capture. It’s all very geeky photography stuff, but it has some pretty profound real-world applications.”
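In machine learning terms, that “checksum” is a held-out validation set. Below is a minimal sketch of the procedure Jain describes - `train_model` and `render_view` are hypothetical stand-ins for whatever reconstruction pipeline is being scored, and the PSNR threshold is an arbitrary placeholder, but the 20/10 split comes straight from the quote:

```python
import numpy as np

def passes_checksum(photos, poses, train_model, render_view,
                    n_train=20, psnr_threshold=25.0):
    """Fit a 3D model on the first n_train photos, then check how well it
    re-renders the held-out views it has never seen."""
    model = train_model(photos[:n_train], poses[:n_train])

    psnrs = []
    for photo, pose in zip(photos[n_train:], poses[n_train:]):
        predicted = render_view(model, pose)        # re-render the unseen viewpoint
        mse = np.mean((predicted - photo) ** 2)     # pixel values assumed in [0, 1]
        psnrs.append(-10.0 * np.log10(mse))         # peak signal-to-noise ratio

    # High PSNR on unseen views means the 3D representation generalizes.
    return float(np.mean(psnrs)) >= psnr_threshold
```

Given 30 photos of a car, this trains on 20 and asks whether the remaining 10 viewpoints are predicted well - if they are, the model has genuinely captured the 3D shape rather than memorizing its inputs.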