Technion’s Rising Star

Technion Assistant Professor Daniel Soudry joins Intel’s Rising Star Faculty award program

Intel’s Rising Star Faculty Award program selected 10 university faculty members who show great promise in developing future computing technologies. With projects ranging from a novel cloud system stack to ultra-low-power computing and memory platforms to artificial intelligence (AI) systems that learn on the fly, these researchers are building advanced technologies today.

The program supports faculty members who are early in their academic careers and who show great promise as future leaders in disruptive computing technologies. It also fosters long-term collaborative relationships with senior technical leaders at Intel.

The awards recognize forward-looking research in computer science, engineering, and social science that supports the global digital transition in the following areas: software, security, interconnect, memory, architecture, and process.

Faculty members who work at the following universities received Rising Star awards: Cornell University, Georgia Tech, Stanford University, Technion, University of California at San Diego, University of Illinois at Urbana-Champaign, University of Michigan, University of Pennsylvania, University of Texas at Austin, and University of Washington.

Ten assistant professors received Intel’s Rising Star Faculty Awards (from top row, left): Asif Khan of Georgia Tech, Chelsea Finn of Stanford University, Hannaneh Hajishirzi of the University of Washington, Baris Kasikci of the University of Michigan, Daniel Soudry of Technion, Nadia Polikarpova of UC San Diego, Jaydeep Kulkarni of UT Austin, Bo Li of the University of Illinois Urbana-Champaign, Hamed Hassani of the University of Pennsylvania, and Christina Delimitrou of Cornell University.

Assistant Professor of Electrical Engineering Daniel Soudry’s contributions address the core challenge of making deep learning more efficient in terms of computational resources. Despite the impressive progress made using artificial neural nets, they still fall far short of the capabilities of biological neural nets in most areas; even the simplest fly is far more resourceful than the most advanced robots. Soudry’s novel approach relies on accurate models with low numerical precision. Decreasing the numerical precision of a neural network model is a simple and effective way to improve its resource efficiency, and nearly all recent deep learning hardware relies heavily on lower-precision math. The benefits are a reduction in the memory required to store the neural network, a reduction in chip area, and a drastic improvement in energy efficiency.
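To give a rough sense of the memory side of that trade-off (this is a generic illustration, not a description of Soudry’s specific method), the Python sketch below stores the same hypothetical weight matrix at 32-bit, 16-bit, and 8-bit precision and compares the footprints; the matrix shape and the simple scaling-based 8-bit quantization are illustrative assumptions.

```python
import numpy as np

# Hypothetical weight matrix for one dense layer (shape chosen for illustration only).
weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)

# Half precision: a simple cast halves the memory footprint.
weights_fp16 = weights_fp32.astype(np.float16)

# 8-bit quantization sketch: scale weights into the int8 range and round.
# A real scheme would also keep the scale factor for dequantization at inference time.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -128, 127).astype(np.int8)

for name, w in [("fp32", weights_fp32), ("fp16", weights_fp16), ("int8", weights_int8)]:
    print(f"{name}: {w.nbytes / 1e6:.1f} MB")
# Approximate output: fp32 ~4.2 MB, fp16 ~2.1 MB, int8 ~1.0 MB.
```

The memory saving scales directly with the bit width; the harder research question, which Soudry’s work addresses, is keeping the model accurate once precision is reduced.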

Filed under: News, Technion Israel