iHub By Jimmy Gitonga / September 7, 2012
The iHub Cluster
With the help of Google Africa and Intel Corporation, the iHub has embarked on a journey whose first milestone is the setting up of a High Performance Computing (HPC) system, which we are calling the iHub Cluster.
The best part is that the idea was presented in 2011 by an iHub community member, Idd Salim, and by December it was clear it was going to happen. It became a matter of ‘when’, not ‘if’.
At the turn of the Millennium, the term “Super Computer” meant a number of networked computers, or a desktop computer that could achieve 1 teraflop. Obviously we have moved on from there, and the term has become diluted in meaning. New terms such as High Performance Computing (HPC) have come up to mean more or less what people envisaged then when “Supercomputing” was talked about. Today we talk of computing clusters.
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.
Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve a small number of somewhat large problems or a large number of small problems, e.g. many user access requests to a database or a web site. Computer system architectures, whose purpose is supporting many users for routine everyday tasks, may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem. These are normally labeled as data centers.
The Super Computer
In general, the speed of supercomputers is measured and benchmarked in “FLOPS” (FLoating point Operations Per Second), and not in MIPS (Millions of Instructions Per Second), as is the case with general purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand “TFLOPS” (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand “PFLOPS” (10^15 FLOPS, pronounced petaflops).
The current entry point into the TOP500 supercomputers list is around 80 teraflops. The top supercomputer as of June 2012 is the USA’s IBM-built “Sequoia”, which runs on POWER processors and stands at 20.13 petaflops (20,132.7 teraflops).
Academic and Communal Institution Super Computing
In 2003, Virginia Tech crashed into the Top500 at number 7 with a supercomputer built of Apple Xserve G5 machines. It ran at 22 teraflops. “System X”, as it was known, cost US $5 million to put together but was bringing in US $20-25 million annually.
From December 2011, Virginia Tech upgraded their supercomputer to one called “HokieSpeed”, built from the “off-the-shelf” technology that any institution or person can quickly assemble. “HokieSpeed” entered at 96 on the Top500 and cost US $1.4 million to build.
Virginia Tech’s “HokieSpeed” does a number of things well:
(a) It is cost effective. It has 209 nodes, each with 2 Intel Xeon E5645 6-core CPUs and 2 NVIDIA M2050/C2050 GPUs on a Supermicro 2026GT0TRF motherboard. All this for US $1.4 million for 120 teraflops.
(b) It uses standard off-the-shelf hardware and open source software.
(c) It tops the “Green List”, a ranking of the supercomputers that are most efficient in power consumption.
So in essence, for US $42,000 at the then prices, one should be able to get 6 nodes of this quality and at least 4 teraflops. This would not get into the Top500, but all the technology would be HPC-compliant and would hit a number of milestones.
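The scaling is easy to check from the HokieSpeed figures quoted above, assuming cost and benchmarked performance scale linearly with node count (a simplification; real Linpack runs lose some efficiency at scale, while per-node peak throughput is higher than the benchmarked average):

```python
# Back-of-the-envelope scaling from the HokieSpeed figures:
# 209 nodes, US $1.4 million, 120 benchmarked teraflops.
TOTAL_COST_USD = 1_400_000
TOTAL_NODES = 209
TOTAL_TFLOPS = 120

cost_per_node = TOTAL_COST_USD / TOTAL_NODES    # ~US$ 6,700 per node
tflops_per_node = TOTAL_TFLOPS / TOTAL_NODES    # ~0.57 TF per node

nodes = 6
print(f"{nodes}-node cluster: ~US$ {nodes * cost_per_node:,.0f}, "
      f"~{nodes * tflops_per_node:.1f} teraflops")
# -> 6-node cluster: ~US$ 40,191, ~3.4 teraflops
```

Linear scaling of the benchmarked figure gives roughly 3.4 teraflops; the 4-teraflop estimate presumably counts peak rather than sustained throughput.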
Placing iHub in the HPC Global Community
During our research into the current HPC arena and what the iHub can do to get into this space, a number of questions arose about the strategic and tactical positioning that would place the iHub in good stead in the global HPC community.
- What is iHub’s ultimate goal in building the “Cluster”?
- Considering that the cluster can and should pay for itself in 1 to 2 years, what systems and personnel would be required for this outcome to be achieved?
- What benefits will the iHub community and academic institutions interested in utilizing the iHub Cluster system receive?
- Will the cluster in the future be open to serve the needs of corporate and industrial sectors, such as the meteorological and oil/extraction industries, or international research requests from the African region and the Middle East? Considering that there is a business case in development, does the iHub have the necessary structures to cater for this?
- To be a respectable HPC system, one needs to enter the HPC Top500 list, which would mean spending around US $500,000 to reach the 80+ teraflops range. This figure will clearly fall in one year’s time. Is this a reasonable future aim?
Some of the answers to these questions will only be realized by actually doing this. And in the words of another iHub member, “BRING IT!”
And a more interesting question is, what would it take to break into the 80+ teraflop range and be a bona fide super computer for the iHub?
Image credits: http://www.top10upper.com/fastest-computer/
Michael Pedersen at 13:34:27PM Friday, September 7, 2012
Keep in mind that by the time a plan to reach 80 TFLOPS has been implemented, the entry point will most likely have gone up to 120 TFLOPS.
The TOP500 is generally “behind reality”, since it only shows those who are actually able to do an official submission before the yearly deadline (as I understand it). A lot generally happens in a year in this area.
jimmy_gitonga at 00:22:54AM Saturday, September 8, 2012
It’s true, the Top500 is a moving target, and planning for a submission to enter the list always means aiming higher than the current entry point.
The Green500 is now the list that really matters. Especially in a place where the energy supply is constrained, that list is more important to us.