Researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step toward creating a bionic brain and unlocking treatments for Alzheimer’s and Parkinson’s diseases.
In case you missed it, here's another chance to catch this week's biggest hits. Writing like a...
AMD provided details of the company’s multi-year strategy to drive profitable growth based on...
Self-driving cars. Computers that detect tumors. Real-time speech translation. Just a few years...
In the last 10 years, computer security researchers have shown that malicious hackers don’t need to see your data in order to steal your data. From the pattern in which your computer accesses its memory banks, adversaries can infer a shocking amount about what’s stored there.
Intel has announced that the U.S. Department of Energy’s (DOE) Argonne Leadership Computing Facility (ALCF) has awarded Intel Federal LLC, a wholly-owned subsidiary of Intel Corporation, a contract to deliver two next-generation supercomputers to Argonne National Laboratory.
Total has chosen SGI to upgrade its Pangea supercomputer. Total is one of the largest integrated oil and gas companies in the world, with activities in more than 130 countries. Its 100,000 employees put their expertise to work in every part of the industry — the exploration and production of oil and natural gas, refining, chemicals, marketing and new energies. The upgraded system would rank in the top 10 of the latest TOP500 list.
NVIDIA has announced that its Pascal GPU architecture, set to debut next year, will accelerate deep learning applications 10X beyond the speed of its current-generation Maxwell processors. NVIDIA CEO and co-founder Jen-Hsun Huang revealed details of Pascal and the company’s updated processor roadmap in front of a crowd of 4,000 during his keynote address at the GPU Technology Conference, in Silicon Valley.
The Penguin Tundra cluster platform is based on Open Compute Project rack-level infrastructure, and is designed to deliver the highest density and lowest total cost of ownership for high performance technical computing clusters. The product line includes a compute sled, a storage sled, and an Intel Xeon Phi processor-based motherboard.
Ansys has announced that engineers using ANSYS 16.0 in combination with Intel Xeon technology can realize up to a threefold reduction in solution time. The ANSYS and Intel partnership ensures that simulation engineers performing structural analysis can expect seamless high-performance computing (HPC) operations with multi-core Xeon E5 v3 processors and many-core Xeon Phi coprocessors.
The HPC and enterprise communities are experiencing a paradigm shift as FLOPs per watt, rather than FLOPs (floating-point operations per second), are becoming the guiding metric in procurements, system design, and now application development. In short, “performance at any cost” is no longer viable, as the operational costs of supercomputer clusters are now on par with the acquisition cost of the hardware itself.
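The shift from raw FLOPs to FLOPs per watt is easy to make concrete with a little arithmetic. The sketch below compares two hypothetical cluster options; all figures are illustrative, not drawn from any real procurement:

```python
# Comparing two hypothetical cluster options by FLOPs per watt rather
# than raw FLOPs. All numbers here are illustrative.
systems = {
    "raw-speed option": {"gflops": 5_000_000, "watts": 2_500_000},
    "efficiency option": {"gflops": 4_200_000, "watts": 1_200_000},
}

for name, s in systems.items():
    efficiency = s["gflops"] / s["watts"]  # GFLOPS per watt
    print(f"{name}: {efficiency:.2f} GFLOPS/W")
```

Under the old metric the first machine wins (5 vs. 4.2 petaflops); under performance per watt the slower machine wins decisively (3.5 vs. 2.0 GFLOPS/W), which is the kind of reversal now shaping procurements.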
The Intel Xeon Processor D-1500 High Density Server Family is a new class of low-power, high-density server solutions optimized for embedded and hyperscale workloads in data center and cloud environments. The servers are available in a growing line of single-processor (UP) motherboards, 1U and mini-tower servers for embedded and network communication/security applications, and a forthcoming high-density 6U 56-node MicroBlade microserver for hyperscale environments.
A relentless global effort to shrink transistors has made computers continually faster, cheaper and smaller over the last 40 years. This effort has enabled chipmakers to double the number of transistors on a chip roughly every 18 months — a trend referred to as Moore's Law. In the process, the U.S. semiconductor industry has become one of the nation's largest export industries, valued at more than $65 billion a year.
The drive toward exascale computing, renewed emphasis on data-centric processing, energy efficiency concerns, and limitations of memory and I/O performance are all working to reshape HPC platforms, according to Intersect360 Research’s Top Six Predictions for HPC in 2015. The report cites many-core accelerators, flash storage, 3-D memory, integrated networking, and optical interconnects as just some of the technologies propelling future...
Children don’t have to be told that “cat” and “cats” are variants of the same word — they pick it up just by listening. To a computer, though, they’re as different as, well, cats and dogs. Yet it’s computers that are assumed to be superior in detecting patterns and rules, not four-year-olds. Researchers are trying, if not to solve that puzzle definitively, at least to provide the tools to do so.
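To see why the task is hard for a computer, consider the crudest rule a program might apply. This is a toy sketch, not the researchers’ method:

```python
def naive_stem(word: str) -> str:
    """Strip a trailing 's' to merge plural variants -- a hand-written
    rule that handles 'cats' but misfires on other words."""
    if word.endswith("s") and len(word) > 3:
        return word[:-1]
    return word

print(naive_stem("cats"))   # cat
print(naive_stem("bus"))    # bus   (unchanged: too short to strip)
print(naive_stem("glass"))  # glas  (wrong: the rule over-applies)
```

Every hand-written rule of this kind has exceptions, which is why learning the variant structure from data, the way children do, is the more interesting problem.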
We computational chemists are an impatient lot. Despite the fact that we routinely deal with highly complicated chemical processes running on our laboratory’s equally complex HPC clusters, we want answers in minutes or hours, not days, months or even years. In many instances, that’s just not feasible; in fact, there are times when the magnitude of the problem simply exceeds the capabilities of the HPC resources available to us.
GPU-accelerated applications have become ubiquitous in scientific supercomputing. Now, we are seeing increased adoption of GPU technology in other computationally demanding disciplines, including deep learning, one of the fastest growing areas in the machine learning and data science fields.
Computer chips’ clocks have stopped getting faster. To keep delivering performance improvements, chipmakers are instead giving chips more processing units, or cores, which can execute computations in parallel. But the ways in which a chip carves up computations can make a big difference to performance.
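The “carving up” can be sketched at the software level: the same sum, split into four pieces that could run on four cores. This is a minimal illustration, not the chip-level scheduling the research examines:

```python
# Divide one computation among four workers; the answer is identical
# regardless of how the work is carved, but the carving can matter
# for performance (e.g., cache behavior).
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Interleaved partitioning: element i goes to worker i % 4.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(data)
```

Interleaved versus contiguous-block partitioning gives the same answer here, but the two carve-ups touch memory in very different patterns — an example of the kind of difference that separates fast parallel code from slow.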
Optimization for high performance and energy efficiency is a necessary next step after verifying that an application works correctly. In the HPC world, profiling means collecting data from hundreds to potentially many thousands of compute nodes over the length of a run. In other words, profiling is a big-data task, but one where the rewards can be significant — including potentially saving megawatts of power or reducing the time to solution.
Researchers have created the first transistors made of silicene, the world’s thinnest silicon material. Their research holds the promise of building dramatically faster, smaller and more efficient computer chips.
IBM and SUNY Polytechnic Institute (SUNY Poly) have announced that more than 220 engineers and scientists who lead IBM's advanced chip research and development efforts at SUNY Poly's Albany Nanotech campus will become part of IBM Research, the technology industry's largest and most influential research organization.
Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer’s memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.
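Those tradeoffs are easy to demonstrate with Python’s built-in structures. A quick sketch: a list and a set hold the same elements, but answer a membership query at very different speeds:

```python
import timeit

data = list(range(100_000))
as_list = data          # contiguous array: linear scan for membership
as_set = set(data)      # hash table: near-constant-time membership

# Look for the worst-case element (last in the list) 100 times each.
list_t = timeit.timeit(lambda: 99_999 in as_list, number=100)
set_t = timeit.timeit(lambda: 99_999 in as_set, number=100)
print(f"list: {list_t:.5f}s  set: {set_t:.6f}s")
```

The set wins membership queries by orders of magnitude, but the list preserves order and supports cheap indexed access — exactly the kind of advantage-for-advantage tradeoff a data structures course catalogs.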
Researchers have developed a programming language that makes the massive costs associated with designing hardware more manageable. Chip manufacturers have been using the same chip design techniques for 20 years. The current process calls for extensive testing after each design step. The newly developed functional programming language makes it possible to prove, in advance, that a design transformation is 100 percent error-free.
In Silicon Valley, it's never too early to become an entrepreneur. Just ask 13-year-old Shubham Banerjee. The California eighth-grader has launched a company to develop low-cost machines to print Braille, the tactile writing system for the visually impaired. Tech giant Intel recently invested in his startup, Braigo Labs.
Rackform iServ R4420 and R4422 high-density servers are designed to deliver cost-effective, energy-efficient compute power in a small footprint. The 2U 4-node products provide high throughput and processing capabilities based on Supermicro TwinPro architecture.
Intel has announced a number of technology advancements and initiatives aimed at accelerating computing into the next dimension. The announcements include the Intel Curie module, a button-sized hardware product for wearable solutions; new applications for Intel RealSense cameras spanning robots, flying multi-copter drones and 3-D immersive experiences; and a broad, new Diversity in Technology initiative.
Sense of urgency and economic impact emphasized: The “hardware first” ethic is changing. Hardware retains the glamour, but there is now the stark realization that the newest parallel supercomputers will not realize their full potential without reengineering the software code to efficiently divide computational problems among the thousands of processors that comprise next-generation many-core computing platforms.
The new L-CSC supercomputer at the GSI Helmholtz Centre for Heavy Ion Research is ranked as the world's most energy-efficient supercomputer. The new system reached first place on the "Green500" list published on November 20, 2014, comparing the energy efficiency of the fastest supercomputers around the world. With a computing power of 5.27 gigaflops per watt, the L-CSC has also set a new world record for energy efficiency.
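The record figure translates directly into performance per unit of power. A quick back-of-envelope check — only the 5.27 gigaflops-per-watt number comes from the article; the 1 MW power budget is hypothetical:

```python
# What the L-CSC's efficiency record implies at scale.
gflops_per_watt = 5.27          # L-CSC's Green500 record (from the article)
power_budget_w = 1_000_000      # hypothetical 1 MW facility budget
peak_gflops = gflops_per_watt * power_budget_w
print(f"{peak_gflops / 1e6:.2f} petaflops per megawatt")
```

At that efficiency, every megawatt of power budget buys roughly 5.27 petaflops of compute, which is why gigaflops per watt has become the headline number on the Green500.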
The PowerEdge C4130 is an accelerator-optimized, GPU-dense, HPC-focused rack server purpose-built to accelerate the most demanding HPC workloads. It is the only Intel Xeon E5-2600 v3 1U server to offer up to four GPUs/accelerators and can achieve over 7.2 teraflops on a single 1U server, with a performance-per-watt ratio of up to 4.17 gigaflops per watt.