How can organizations embrace — instead of brace for — the rapidly intensifying collision of public and private clouds, HPC environments and Big Data? The current go-to solution for many organizations is to run these technology assets in siloed, specialized environments. This approach falls short, however, typically taxing one data center area while others remain underutilized, functioning as little more than expensive storage space.
At Cycle Computing, we're seeing several large trends relating to Big Data and Analytics. We...
Scalable Productivity and the Ever-Increasing Tie between Data Analytics, Data Management and Computation | March 7, 2014 3:52 pm | by Barry Bolding
Cray continues to see an increasing trend in the HPC marketplace that we are calling “data-...
From the start of the supercomputer era in the 1960s — and even earlier — an important subset of HPC jobs has involved analytics — attempts to uncover useful information and patterns in the data itself. Cryptography, one of the original scientific-technical computing applications, falls predominantly into this category.
Steve Conway, VP of HPC at IDC, explains that, to date, most data-intensive HPC jobs in the government, academic and industrial sectors have involved the modeling and simulation of complex physical and quasi-physical systems. However, he notes that from the start of the supercomputer era in the 1960s — and even earlier — an important subset of HPC jobs has involved analytics: attempts to uncover useful information and patterns in the data itself.
The Rackform iServ R456 is a server built around Intel Xeon E7-4800 v2 processors, formerly codenamed Ivy Bridge-EX. The server supports four of the processors and up to 96 DDR3 DIMMs, triple the memory of previous products based on Intel Xeon E7-4800 processors.
With cutting-edge technology, sometimes the first step scientists face is just making sure it actually works as intended. The University of Southern California (USC) Viterbi School of Engineering is home to the USC-Lockheed Martin Quantum Computing Center, a super-cooled, magnetically shielded facility specially built to house the first commercially available quantum computing processors.
The Intelligent Storage Bridge (ISB) is designed to enhance the throughput and reliability of large data transfers, thereby increasing fast scratch efficiency and overall application workflow performance. Used in HPC environments, it includes vendor-agnostic support for Lustre solutions, allowing organizations to bring together a wider range of HPC and enterprise storage solutions.
The UK Science and Technology Facilities Council and Rogue Wave Software have signed a collaboration agreement to work together on software tools that significantly increase the productivity of software development for scientific computing. Under the agreement, they will develop next-generation HPC software tools to enhance development capabilities on the newest supercomputers.
Mellanox Technologies, a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, announced a collaboration with the University of Cambridge for the Square Kilometer Array (SKA) project. The University of Cambridge selected the company’s Virtual Protocol Interconnect (VPI) solution to provide it with interconnect performance and protocol flexibility for SKA test-bed clusters. The University of Cambridge and Mellanox will use the compute clusters for various development projects for the SKA project, an international effort to build the world’s largest radio telescope.
The NAG Library for Python gives users of the Python language access to over 1,700 mathematical and statistical routines in the NAG Library. It has been enhanced in line with Python 2.7 and Python 3, and now features an improved Pythonic interface and a new Python egg installer.
Big Data tools such as Grok and IBM Watson are enabling large organizations to behave more like agile startups. Of the transformative technology developments that have ushered in the current frenzy of activity along the information superhighway, the 1994 invention of the “Wiki” by Ward Cunningham is among the most disruptive.
Encryption and nuclear weapons are two easily recognized examples where a combinatorial explosion is a sought-after characteristic. In the software development world, combinatorial explosions are bad. In particular, it is far too easy to become lost in the minutiae of writing code that can run efficiently on NVIDIA GPUs, AMD GPUs, x86, ARM and Intel Xeon Phi while also addressing the numerous compiler and user interface vagaries.
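To make the software side of that explosion concrete, here is a minimal sketch in Python of how quickly a portability support matrix grows; the specific lists of targets, compilers and programming interfaces are illustrative assumptions, not a catalog from the article.

```python
# Illustrative only: count how fast a portability support matrix grows.
# The target, compiler and interface lists are assumptions for this example.
from itertools import product

targets = ["NVIDIA GPU", "AMD GPU", "x86", "ARM", "Intel Xeon Phi"]
compilers = ["gcc", "clang", "icc", "nvcc"]
interfaces = ["OpenMP", "OpenCL", "CUDA"]

cases = list(product(targets, compilers, interfaces))
print(f"{len(targets)} targets x {len(compilers)} compilers x "
      f"{len(interfaces)} interfaces = {len(cases)} build/test combinations")
```

Every additional axis (operating system, driver version, math library) multiplies that count again, which is exactly the kind of explosion the article warns about.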
The 10-day tour of Europe was not your typical itinerary — Garching, Karlsruhe, Villigen, Hamburg and Oxford. In January. But David Brown and Craig Tull of the Computational Research Division and Alex Hexemer of the Advanced Light Source weren’t touring to see the sights — they were more interested in seeing the lights — powerful scientific instruments known as light sources that use intense X-rays to study materials.
The Department of Energy’s National Energy Research Scientific Computing Center (NERSC) announced the winners of its second annual High Performance Computing (HPC) Achievement Awards on February 4, 2014, during the annual NERSC User Group meeting at Lawrence Berkeley National Laboratory (Berkeley Lab).
Just as Netflix uses an algorithm to recommend movies we ought to see, a Stanford software system offers by-the-moment advice to thousands of server-farm computers on how to efficiently share the workload. We hear a lot about the future of computing in the cloud, but not much about the efficiency of the data centers that make the cloud possible, where clusters work together to host applications ranging from big data analytics...
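The Stanford system itself is not described in detail here, but the general idea of by-the-moment placement advice can be illustrated with a minimal greedy scheduler sketch in Python; the function and the job/server numbers below are hypothetical, not part of the Stanford software.

```python
# A minimal greedy placement sketch: send each incoming job to the currently
# least-loaded server. Hypothetical illustration, not the Stanford system.
import heapq

def place_jobs(job_loads, num_servers):
    """Return (job_load, server_index) assignments that balance total load."""
    servers = [(0.0, i) for i in range(num_servers)]  # (current load, server id)
    heapq.heapify(servers)
    assignments = []
    for load in job_loads:
        current, idx = heapq.heappop(servers)          # lightest server first
        assignments.append((load, idx))
        heapq.heappush(servers, (current + load, idx))
    return assignments

print(place_jobs([3.0, 1.0, 2.0, 2.5, 0.5], num_servers=3))
```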
Lawrence Livermore has joined forces with two other national labs to deliver next generation supercomputers able to perform up to 200 peak petaflops (quadrillions of floating point operations per second), about 10 times faster than today's most powerful high performance computing (HPC) systems.
Researchers at IBM have set a new record for data transmission over a multimode optical fiber, a type of cable that is typically used to connect nearby computers within a single building or on a campus. The achievement demonstrated that the standard, existing technology for sending data over short distances should be able to meet the growing needs of servers, data centers and supercomputers through the end of this decade.
The Russian Ministry of Education and Science has awarded a $3.4 million “mega-grant” to Alexei Klimentov, Physics Applications Software Group Leader at the U.S. Department of Energy’s Brookhaven National Laboratory, to develop new “big data” computing tools for the advancement of science.
How do you build a universal quantum computer? Turns out, this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.
AT&T and IBM have announced a new global alliance agreement to develop solutions that help support the "Internet of Things." The companies will combine their analytic platforms, cloud and security technologies with privacy in mind to gain more insights on data collected from machines in a variety of industries.
Although the time and cost of sequencing an entire human genome have plummeted, analyzing the resulting three billion base pairs of genetic information from a single genome can take many months. However, a team working with Beagle, one of the world's fastest supercomputers devoted to life sciences, reports that genome analysis can be radically accelerated. This computer is able to analyze 240 full genomes in about two days.
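A quick back-of-the-envelope check of the throughput implied by those figures (240 genomes in roughly two days; the reported run time is approximate):

```python
# Rough throughput implied by the reported figures; the two-day run time
# is approximate, so these numbers are only a back-of-the-envelope estimate.
genomes = 240
hours = 2 * 24
print(f"~{genomes / hours:.0f} genomes per hour")
print(f"~{hours * 60 / genomes:.0f} minutes of wall-clock time per genome")
```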
Multi-scale Simulation Software for Chemistry Research Developed Using Trestles and Gordon Supercomputers | February 19, 2014 6:48 pm | by San Diego Supercomputer Center
Researchers at the San Diego Supercomputer Center at the University of California, San Diego, have developed software that greatly expands the types of multi-scale QM/MM (mixed quantum and molecular mechanical) simulations of complex chemical systems that scientists can use to design new drugs, better chemicals, or improved enzymes for biofuels production.
The Intel Xeon processor E7 v2 family delivers capabilities to process and analyze large, diverse amounts of data to unlock information that was previously inaccessible. The processor family has triple the memory capacity of the previous generation, allowing much faster and more thorough data analysis.
Black holes may be dark, but the areas around them definitely are not. These dense, spinning behemoths twist up gas and matter just outside their event horizon, and generate heat and energy that gets radiated, in part, as light. And when black holes merge, they produce a bright intergalactic burst that may act as a beacon for their collision.
HPC matters, now more than ever. What better way to show how it matters than through your submission to the SC14 Technical Program? Technical Program submissions opened February 14th for Research Papers, Posters (Regular, Education, and ACM Student Research Competition), Panels, Tutorials, BOF Sessions, the Scientific Visualization and Data Analytics Showcase, Emerging Technologies, and the Doctoral Showcase.
Researchers have found that the melanopsin pigment in the eye is potentially more sensitive to light than its more famous counterpart, rhodopsin, the pigment that allows for night vision. For more than two years, they have been investigating melanopsin, a retina pigment capable of sensing light changes in the environment, informing the nervous system and synchronizing it with the day/night rhythm.
Seeking a solution to decoherence — the “noise” that prevents quantum processors from functioning properly — scientists have developed a strategy of linking quantum bits together into voting blocks, which significantly boosts their accuracy. The team found that their method results in at least a five-fold increase in the probability of reaching the correct answer when the processor solves the largest problems.
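The intuition behind voting blocks can be sketched with a simple classical analogy: if each copy of a bit is independently wrong with some probability, a majority vote over several copies is wrong far less often. The per-copy error rate and block sizes below are illustrative assumptions, not figures from the study.

```python
# Classical analogy for voting blocks: a majority vote over n noisy copies
# fails only when more than half the copies err. Numbers are illustrative.
from math import comb

def majority_error(p, n):
    """Probability that a majority of n independent copies is wrong."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.10  # assumed per-copy error rate
for n in (1, 3, 5, 7):
    print(f"{n} copies: error probability = {majority_error(p, n):.4f}")
```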