TECH SPACE
A programming language for hardware accelerators
by Rachel Gordon | MIT CSAIL
Boston MA (SPX) Jul 15, 2022


Moore's Law needs a hug. The days of stuffing ever more transistors onto little silicon computer chips are numbered, and their life rafts - hardware accelerators - come with a price.

When programming an accelerator - a process where applications offload certain tasks to specialized system hardware to speed those tasks up - you have to build an entirely new software stack. Hardware accelerators can run certain tasks orders of magnitude faster than CPUs, but they cannot be used out of the box: software must use the accelerator's instructions efficiently and integrate them with the rest of the application. That translates to a lot of engineering work, which then has to be redone and maintained for every new chip you compile code to, in whatever programming language you use.

Now, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new programming language called "Exo" for writing high-performance code on hardware accelerators. Exo helps low-level performance engineers transform very simple programs that specify what they want to compute into very complex programs that do the same thing as the specification, but much, much faster, by using these special accelerator chips. An engineer can, for example, use Exo to turn a simple matrix multiplication into a far more complex program that runs orders of magnitude faster on such an accelerator.
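To make the spec-versus-optimized-program distinction concrete, here is a toy illustration in plain Python (this is not Exo's actual syntax): a simple matrix-multiplication "specification" and a tiled rewrite of the kind a performance engineer might derive, which computes exactly the same values in a different, cache-friendlier loop order.

```python
# Toy illustration of the idea behind Exo: a simple spec that states
# WHAT to compute, and a rewritten version that computes the same
# result with a restructured loop nest. Plain Python, not Exo syntax.

def matmul_spec(A, B, M, N, K):
    """Specification: C[i][j] = sum over k of A[i][k] * B[k][j]."""
    C = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            for k in range(K):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, M, N, K, T=2):
    """Same computation, with the i/j loops split into T-sized tiles."""
    C = [[0.0] * N for _ in range(M)]
    for i0 in range(0, M, T):
        for j0 in range(0, N, T):
            for k in range(K):
                for i in range(i0, min(i0 + T, M)):
                    for j in range(j0, min(j0 + T, N)):
                        C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert matmul_spec(A, B, 2, 2, 2) == matmul_tiled(A, B, 2, 2, 2)
```

In Exo, transformations like this tiling are applied as explicit, checked rewrite steps rather than written out by hand as above.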

Unlike other programming languages and compilers, Exo is built around a concept called "Exocompilation."

"Traditionally, a lot of research has focused on automating the optimization process for the specific hardware," says Yuka Ikarashi, a PhD student in electrical engineering and computer science and CSAIL affiliate who is a lead author on a new paper about Exo.

"This is great for most programmers, but for performance engineers, the compiler gets in the way as often as it helps. Because the compiler's optimizations are automatic, there's no good way to fix it when it does the wrong thing and gives you 45 percent efficiency instead of 90 percent."

With Exocompilation, the performance engineer is back in the driver's seat. Responsibility for choosing which optimizations to apply, when, and in what order is externalized from the compiler, back to the performance engineer. This way, they don't have to waste time fighting the compiler on the one hand, or doing everything manually on the other. At the same time, Exo takes responsibility for ensuring that all of these optimizations are correct. As a result, the performance engineer can spend their time improving performance, rather than debugging the complex, optimized code.
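The division of labor described above can be sketched in a few lines of plain Python (again, not Exo's real machinery): the engineer supplies a hand-chosen rewrite, and a checker, standing in for Exo's correctness guarantee, verifies it against the specification. Here the "check" is mere randomized testing; Exo itself guarantees equivalence of its rewrites.

```python
# Sketch of the Exocompilation workflow: the engineer chooses the
# optimization, the system checks it against the spec. The checker
# below uses randomized testing as a stand-in for a real guarantee.

import random

def spec(xs):
    """Specification: sum of squares of a list of numbers."""
    return sum(x * x for x in xs)

def unrolled_by_2(xs):
    """Hand-chosen rewrite: process two elements per iteration."""
    total, i = 0.0, 0
    while i + 1 < len(xs):
        total += xs[i] * xs[i] + xs[i + 1] * xs[i + 1]
        i += 2
    if i < len(xs):          # leftover element when len(xs) is odd
        total += xs[i] * xs[i]
    return total

def check_equivalent(f, g, trials=100):
    """Compare f and g on random integer inputs (exact arithmetic)."""
    for _ in range(trials):
        xs = [random.randint(-10, 10) for _ in range(random.randint(0, 8))]
        if f(xs) != g(xs):
            return False
    return True

assert check_equivalent(spec, unrolled_by_2)
```

If the engineer's rewrite were wrong, the checker would reject it; the engineer never has to debug the optimized code by eye.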

"Exo language is a compiler that's parameterized over the hardware it targets; the same compiler can adapt to many different hardware accelerators," says Adrian Sampson, assistant professor in the Department of Computer Science at Cornell University. "Instead of writing a bunch of messy C++ code to compile for a new accelerator, Exo gives you an abstract, uniform way to write down the 'shape' of the hardware you want to target.

Then you can reuse the existing Exo compiler to adapt to that new description instead of writing something entirely new from scratch. The potential impact of work like this is enormous: If hardware innovators can stop worrying about the cost of developing new compilers for every new hardware idea, they can try out and ship more ideas. The industry could break its dependence on legacy hardware that succeeds only because of ecosystem lock-in and despite its inefficiency."

The highest-performance computer chips made today, such as Google's TPU, Apple's Neural Engine, or NVIDIA's Tensor Cores, power scientific computing and machine learning applications by accelerating "kernels" - key sub-programs, also known as high-performance computing (HPC) subroutines.

Clunky jargon aside, these programs are essential. For example, the Basic Linear Algebra Subprograms (BLAS) are a "library," or collection, of such subroutines dedicated to linear algebra computations; they underpin machine learning workloads like neural networks as well as weather forecasting, cloud computing, and drug discovery. (BLAS is so important that it helped win Jack Dongarra the 2021 Turing Award.) However, these new chips - which take hundreds of engineers to design - are only as good as these HPC software libraries allow.

Currently, though, this kind of performance optimization is still done by hand to ensure that every last cycle of computation on these chips gets used. HPC subroutines regularly run at 90-plus percent of theoretical peak efficiency, and performance engineers go to great lengths to claw back an extra five or ten percent. So if the software isn't aggressively optimized, all of the hardware designers' hard work gets wasted - which is exactly what Exo helps avoid.
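The "percent of peak" figures above are simple arithmetic: achieved throughput divided by the chip's theoretical maximum. A back-of-the-envelope sketch, with purely illustrative numbers (the chip specs and timing below are made up, not measurements):

```python
# "Percent of peak" arithmetic. All numbers here are illustrative.

def percent_of_peak(flops_done, seconds, peak_flops_per_sec):
    """Achieved fraction of theoretical peak throughput, in percent."""
    achieved = flops_done / seconds
    return 100.0 * achieved / peak_flops_per_sec

# A dense N x N matrix multiplication does about 2*N^3 floating-point
# operations (one multiply and one add per inner-loop step).
n = 4096
work = 2 * n ** 3

# Hypothetical chip with a 10 TFLOP/s peak; kernel finishes in 0.015 s.
eff = percent_of_peak(work, 0.015, 10e12)
print(f"{eff:.0f}% of peak")  # -> 92% of peak
```

At 45 percent of peak, the same kernel would take roughly twice as long, which is why the gap between an automatic compiler's output and hand-tuned code matters so much.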

Another key part of Exocompilation is that performance engineers can describe the new chips they want to optimize for, without having to modify the compiler. Traditionally, the definition of the hardware interface is maintained by the compiler developers, but with most of these new accelerator chips, the hardware interface is proprietary. Companies have to maintain their own copy (fork) of a whole traditional compiler, modified to support their particular chip. This requires hiring teams of compiler developers in addition to the performance engineers.

"In Exo, we instead externalize the definition of hardware-specific backends from the exocompiler. This gives us a better separation between Exo - which is an open-source project - and hardware-specific code - which is often proprietary. We've shown that we can use Exo to quickly write code that's as performant as Intel's hand-optimized Math Kernel Library. We're actively working with engineers and researchers at several companies," says Gilbert Bernstein, a postdoc at the University of California at Berkeley.

Future work on Exo includes exploring a more productive scheduling meta-language and expanding its semantics to support parallel programming models, so that it can target even more accelerators, including GPUs.

Ikarashi and Bernstein wrote the paper alongside Alex Reinking and Hasan Genc, both PhD students at UC Berkeley, and MIT Assistant Professor Jonathan Ragan-Kelley.

This work was partially supported by the Applications Driving Architectures center, one of six centers of JUMP, a Semiconductor Research Corporation program co-sponsored by the Defense Advanced Research Projects Agency. Ikarashi was supported by Funai Overseas Scholarship, Masason Foundation, and Great Educators Fellowship. The team presented the work at the ACM SIGPLAN Conference on Programming Language Design and Implementation 2022.

Research Report: "Exocompilation for productive programming of hardware accelerators"


Related Links
Computer Science and Artificial Intelligence Laboratory (CSAIL)
Space Technology News - Applications and Research

