Chief scientist Bill Dally explains the 4 ingredients that brought Nvidia so far → Read More
IEEE Spectrum's semiconductor expert Samuel K. Moore and the race to put fiber optics inside computers → Read More
Object recognition neural networks are only as good as the data they’re trained on. And that data is heavy on images from high-income countries in Europe and North America. So, when confronted with everyday items from lower-income countries, they get it right as little as 20 percent of the time. → Read More
Ann B. Kelleher explains what's new 75 years after the transistor's invention → Read More
The past, present, and future of the modern world’s most important invention → Read More
In 75 years, it’s become tiny, mighty, ubiquitous, and just plain weird → Read More
Nvidia H100 and Intel Sapphire Rapids Xeon debut on ML Perf training benchmarks → Read More
The most advanced processors today are no longer a single piece of silicon. Instead they are multiple “chiplets” bound together by advanced packaging techniques that do their best to make it seem as if everything really is one big chip. But this startup's new interconnect tech might change the game. → Read More
A new consortium of fabs and suppliers wants the semiconductor industry to cut its carbon footprint → Read More
Newest MLPerf inferencing results include tests of new chips, slimmed down neural networks, and more → Read More
According to the best measures we’ve got, a set of benchmarks called MLPerf, machine learning systems can be trained nearly twice as quickly as they could last year. It’s a figure that outstrips Moore’s Law by quite a ways. → Read More
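The MLPerf blurb above pits a roughly 2x-per-year training speedup against Moore's Law, conventionally a doubling every two years. A minimal sketch of how far those two paces diverge (the four-year horizon and the doubling periods here are illustrative assumptions, not figures from the article):

```python
# Compare two exponential improvement rates: the ~2x/year pace the
# teaser attributes to MLPerf results, and a Moore's-Law baseline
# of ~2x every two years. Horizon of 4 years is an arbitrary example.
def growth_factor(doubling_period_years: float, years: float) -> float:
    """Multiplicative improvement after `years` at the given doubling rate."""
    return 2.0 ** (years / doubling_period_years)

mlperf_pace = growth_factor(doubling_period_years=1.0, years=4.0)
moore_pace = growth_factor(doubling_period_years=2.0, years=4.0)

print(f"~2x/year pace over 4 years:  {mlperf_pace:.0f}x")  # 16x
print(f"Moore's-Law pace over 4 yrs: {moore_pace:.0f}x")   # 4x
```

Over four years the faster doubling period compounds to a 4x gap, which is what "outstrips Moore's Law by quite a ways" amounts to numerically.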
It took a major redesign for cheap flexible chips to reach their promise → Read More
How many dominos could fall if this centerpiece CPU’s weakness pans out? → Read More
Transformers, the type of neural network behind OpenAI's GPT-3 and other big natural language processors, are quickly becoming some of the most important in industry—and likely to spread to other areas of AI. Nvidia's new Hopper H100 is proof that the AI accelerator maker is a believer. → Read More
Last month, two companies said they have reached the next stage in shrinking the pixels on CMOS camera chips. Both Santa Clara-based Omnivision and South Korea's Samsung claimed pixels with a pitch of just 0.56 micrometers. → Read More
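To put the 0.56-micrometer pixel pitch cited above in perspective, a quick back-of-the-envelope density calculation (the per-square-millimeter framing is my own illustration; the article only supplies the pitch):

```python
# Pixel density implied by a 0.56 µm pixel pitch.
PITCH_UM = 0.56  # pixel pitch from the article, in micrometers

pixels_per_mm = 1000.0 / PITCH_UM      # pixels along a 1 mm edge
pixels_per_mm2 = pixels_per_mm ** 2    # pixels in one square millimeter

print(f"{pixels_per_mm:.0f} pixels per linear mm")        # ~1786
print(f"{pixels_per_mm2 / 1e6:.2f} megapixels per mm^2")  # ~3.19
```

Roughly 3.2 megapixels fit in a single square millimeter at that pitch, which is why sub-micron pixels let phone-sized sensors reach very high resolutions.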
UK-based AI computer company Graphcore made a significant boost to its computers' performance without changing much of anything about its specialized AI processor cores. The secret is TSMC's wafer-on-wafer 3D integration technology. → Read More
On Monday, Intel unveiled new details of the processor that will power the Aurora supercomputer, which is designed to become one of the first U.S.-based machines to pierce the exaflop barrier: a billion billion high-precision floating-point calculations per second. → Read More
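"A billion billion" operations per second is 10^18 FLOP/s, one exaflop. A tiny sketch of the scale, comparing against an assumed ~1-teraflop consumer machine (the laptop figure is an illustrative assumption, not from the article):

```python
# The exaflop barrier in plain numbers.
EXAFLOP = 10**18        # a billion billion FLOP/s, per the teaser above
LAPTOP_FLOPS = 10**12   # assumed ~1 TFLOPS consumer machine (hypothetical)

ratio = EXAFLOP // LAPTOP_FLOPS
print(f"One exaflop = 10^18 FLOP/s, about {ratio:,}x a 1-TFLOPS laptop.")
# prints "One exaflop = 10^18 FLOP/s, about 1,000,000x a 1-TFLOPS laptop."
```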