GLOBAL – Could you build a bridge by setting some sliders on your smartphone and waiting two seconds for a highly complex calculation? This team working for an NGO in El Salvador did.
[do action="boxout"]Scientists at the EPCC, the supercomputing centre at the University of Edinburgh, estimate it would take a human brain 30 million years to replicate a 1-second calculation by a supercomputer.[/do]
The Ranger supercomputer is the 17th fastest in the world. It’s a Texas computational mega-beast with 62,976 processor cores reaching a peak performance of 580 teraflops, memory of 123 terabytes and disk storage of 1.73 petabytes.
Computer speed is normally measured by researchers in the number of floating-point operations per second (flops). Ranger has a peak performance equal to 5.8 x 10^14 flops.
By comparison, smartphones manage about 100 megaflops, or 10^8 flops. So you could say that Ranger is 5.8 million times faster.
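That back-of-the-envelope ratio is easy to check (both figures are rough peak numbers from the article, not sustained performance):

```python
# Peak throughput figures quoted in the article (rough peaks, not sustained).
ranger_flops = 5.8e14      # Ranger: 580 teraflops
smartphone_flops = 1e8     # a ~2011 smartphone: ~100 megaflops

speedup = ranger_flops / smartphone_flops
print(f"Ranger is roughly {speedup:,.0f} times faster")  # → 5,800,000
```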
With power like that it’s hard to believe you could perform the same calculations on a smartphone with similar spec to a Nokia Lumia 800 or N9 as on a supercomputer – but researchers in the US have done just that.
A team from the Massachusetts Institute of Technology (MIT) and Harvard are about to announce a widely available app that can provide real-time, reliable solutions to the same problems – from a smartphone anywhere on the planet.
That’s never been more relevant: Last year sales of smartphones overtook personal computers for the first time. A report by Canalys showed that in 2011 the number of smartphones shipped grew by a massive 62.7% – with 488 million smartphones compared to only 415 million PCs.
Sarah Perez on Techcrunch responded to the figures, asking “When will the post-PC era arrive? It just did.” Remember your kid isn’t going to get a desktop, she added – they’re getting a tablet, or a phone because “smartphones are PCs, too. The most affordable ones.”
The future of computing may be a little more complicated than that, but in essence we’ll be doing more of what we used to do on a computer – on our phones instead.
Apps like the one developed at MIT could make essential engineering calculations – for building bridges, say – more easily available on construction sites, and to communities in the developing world. The app could also be used for landmine detection, and for determining the optimal shape of buildings.
One of the project leaders, David Knezevic, said:
“Smartphones are the new frontier in computational engineering.”
Knezevic, who is now at Harvard, first developed the app as a post-doctoral associate in mechanical engineering at MIT, working in the lab of Professor Anthony Patera.
“At the moment all of this work is done on desktops, or supercomputers,” Knezevic said. “It’s time-consuming, and very expensive.”
The research team used the Ranger supercomputer at the Texas Advanced Computing Center to generate a small “reduced model”, which was then transferred to a smartphone.
This kind of model reduction has been used before; in October we wrote about the team of Danish scientists who have developed the world’s first mobile brain scanner using a Nokia smartphone.
In this case, Knezevic believes the MIT team’s approach is distinguished by rigorous “error bounds”, created using mathematical theories devised in Professor Patera’s lab. These tell a user the range within which the true solution must lie, and so how accurate the reduced model is.
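The article doesn’t publish the theory, but the flavour of a residual-based error bound can be sketched on a toy linear system: if you can certify a lower bound on the operator’s smallest eigenvalue, the size of the residual gives a guaranteed worst-case error. All sizes and matrices below are illustrative stand-ins, not the team’s actual mathematics.

```python
import numpy as np

# Toy residual-based error bound. Construct a symmetric positive-definite
# matrix whose smallest eigenvalue is provably at least n, because
# M @ M.T contributes nothing negative on top of n * I.
rng = np.random.default_rng(1)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD; smallest eigenvalue >= n
f = rng.standard_normal(n)

u_exact = np.linalg.solve(A, f)
u_approx = u_exact + 1e-3 * rng.standard_normal(n)  # stand-in reduced solution

residual = f - A @ u_approx
alpha = n                             # certified lower bound on eigmin(A)
bound = np.linalg.norm(residual) / alpha   # guaranteed worst-case error

true_error = np.linalg.norm(u_exact - u_approx)
print(true_error <= bound)            # the bound always holds → True
```

The point is that the phone never needs the exact solution to know how far off it might be: the residual is cheap to evaluate, and the eigenvalue bound is computed once, offline.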
For Knezevic the hardest part of the project was developing those mathematical algorithms: “It took a decade of work in the MIT research group.”
The app uses a range of parameters set by the user to draw data from the original supercomputer simulation. By moving slider bars on the smartphone, someone who wants to build a bridge can estimate stresses by changing the density of the material, or the thickness of the pylons.
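The division of labour behind those sliders – heavy “offline” work on the supercomputer, a tiny “online” solve on the phone – can be sketched with a toy reduced-basis setup. The parameter names, matrix sizes and random stand-in basis below are all invented for illustration; the real app’s model is not published.

```python
import numpy as np

rng = np.random.default_rng(0)
n_full, n_red = 500, 8   # full-mesh unknowns vs reduced unknowns (toy sizes)

def random_spd(n):
    """A random symmetric positive-definite matrix, standing in for a stiffness matrix."""
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

# --- Offline (supercomputer, done once) ------------------------------
# Pretend the full operator depends affinely on the slider parameters:
#   A(mu) = mu_density * A1 + mu_thickness * A2
A1, A2 = random_spd(n_full), random_spd(n_full)
f = rng.standard_normal(n_full)                # load vector

# A real reduced basis V comes from compressing full solution snapshots;
# here a random orthonormal basis is a stand-in.
V, _ = np.linalg.qr(rng.standard_normal((n_full, n_red)))

# Precompute the small pieces the phone actually ships with:
A1_r, A2_r = V.T @ A1 @ V, V.T @ A2 @ V        # n_red x n_red each
f_r = V.T @ f

# --- Online (phone, every slider move) -------------------------------
def solve_reduced(mu_density, mu_thickness):
    """Assemble and solve only an n_red x n_red system."""
    A_r = mu_density * A1_r + mu_thickness * A2_r
    return np.linalg.solve(A_r, f_r)

coeffs = solve_reduced(2.0, 0.5)
u_approx = V @ coeffs                          # lift back to the full mesh
print(u_approx.shape)                          # → (500,)
```

Because the affine parameter dependence survives the projection, the phone only ever multiplies and solves 8×8 matrices, which is why the response feels instantaneous.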
The result is a 3D visualization, developed at MIT by Phuong Huynh. Huynh, who is originally from Vietnam, said: “What was really challenging is that a smartphone doesn’t have a lot of memory, so we needed to work out how to extract only the data needed to create what is visual. In other words, for the visualization, you don’t need to use all the data relating to what is inside – only what is on screen.”
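One simple way to realise Huynh’s idea on a tetrahedral mesh is to keep only the faces that belong to exactly one tetrahedron – those are the surface faces a viewer can actually see, while shared faces are interior and can be discarded. This is a generic illustration, not the app’s actual code.

```python
from collections import Counter

def boundary_faces(tets):
    """Return the faces that appear in exactly one tetrahedron (the surface)."""
    faces = Counter()
    for a, b, c, d in tets:
        for face in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
            faces[tuple(sorted(face))] += 1
    return [f for f, count in faces.items() if count == 1]

# Two tetrahedra sharing face (1, 2, 3): the shared face is interior.
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
surface = boundary_faces(tets)
print(len(surface))  # → 6 (all faces except the one shared face)
```

On a real mesh the interior dwarfs the surface, so dropping everything the camera cannot see is exactly the kind of memory saving a phone needs.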
It all began as a brainstorming session, according to Huynh. That session turned into a research project – and is now soon to be developed more widely as an app on different smartphone platforms. Knezevic can’t reveal more yet, but expects full details will be announced within the next few months.
It will be the conclusion of years of work turning the most complex calculations into something simple, and user friendly. When you explain that it can solve, in a second, a problem that would take two hours on a supercomputer, Knezevic says that people, “instantly understand what it is all about.”
Knezevic adds that it is important to remember that the original simulations will still be generated on a supercomputer. Smarter smartphones don’t make computers redundant – a recent article in WIRED by Robert McMillan highlighted the work of Wu Feng from Virginia Tech on making smaller supercomputers that can operate as business desktops, while Jason Perlow pointed out on ZDNet that ‘post-PC’ probably means greater integration between smartphones, tablets, desktops and cloud computing.
While the nature of those relationships has yet to evolve, everyone agrees that the future will provide a more integrated, and mobile, experience – with small devices providing real answers to big problems.