IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.
A case study published in The International Journal of Business Process Integration and...
The National Institute of Standards and Technology (NIST) has issued for public review and...
IBM is announcing a new software defined storage-as-a-service on IBM SoftLayer, code named...
Registration is now open for the 2014 ISC Cloud and ISC Big Data Conferences, which will be held this fall in Heidelberg, Germany. The fifth ISC Cloud Conference will take place at the Marriott Hotel from September 29 to 30, and the second ISC Big Data will be held from October 1 to 2 at the same venue.
Michael M. Resch, the Director of the Stuttgart High Performance Computing Center (HLRS) will be talking about “HPC and Simulation in the Cloud – How Academia and Industry Can Benefit.” His keynote is of special interest to cloud skeptics, given that prior to 2011, Resch himself was a vocal cloud pessimist. Three years later, he feels that this technology provides a practical option for many users.
IBM Announces $3B Research Initiative to Tackle Chip Grand Challenges for Cloud and Big Data Systems (July 9, 2014, by IBM)
IBM has announced it is investing $3 billion over the next five years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments are intended to push IBM's semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.
Moab HPC Suite-Enterprise Edition 8.0 (Moab 8.0) is designed to enhance Big Workflow by accelerating intensive simulations and big data analysis, speeding time to insight. It delivers dynamic scheduling, provisioning and management of multi-step/multi-application services across HPC, cloud and big data environments. The software suite bolsters Big Workflow’s core services: unifying data center resources, optimizing the analysis process and guaranteeing services to the business.
An energy-efficient supercomputer cooled with warm water. How cool is that? Enlightenment has long been the ultimate pursuit of artists, philosophers, scientists, theologians and other sentient minds. Whether delivering the proof to support their theses or investigating a perplexing problem before them, they have poured vast amounts of energy into the effort. Now energy itself has become the problem.
HP has announced innovations and sustainable enterprise infrastructure solutions designed to deliver the simplicity, efficiency and investment protection organizations need to bridge the datacenter technologies of today and tomorrow. Big data, mobility, security and cloud computing are forcing organizations to rethink their approach to technology, causing them to invest heavily in IT infrastructure.
The lack of a holistic data management environment to support virtualization has left project managers in a haze about how best to address the needs of the business. The sky is beginning to clear somewhat with recent introductions from companies such as Accelrys, Core Informatics and PerkinElmer. Those products, along with CDD, will be discussed to highlight capabilities and vendor approaches.
A complicated decision: To purchase infrastructure or run remotely in the cloud? Bandwidth and data security issues provide the easiest gating factors to evaluate, because an inability to access data kills any chance of using remote infrastructure, be it the public cloud or at a remote HPC center. If running remotely is an option, then the challenge lies in determining the return on investment (ROI) for the remote and local options ...
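To make that ROI comparison concrete, here is a minimal sketch in Python. The purchase price, operating costs, usage and cloud rates are illustrative assumptions, not figures from the article or from any vendor.

```python
# A minimal ROI sketch: total cost of buying a cluster vs. renting
# equivalent capacity in a public cloud. All figures are illustrative
# assumptions, not vendor pricing.

def local_cost(purchase_price, annual_opex, years):
    """Lifetime cost of owning on-premise infrastructure."""
    return purchase_price + annual_opex * years

def cloud_cost(node_hours_per_year, price_per_node_hour, years):
    """Lifetime cost of renting equivalent capacity on demand."""
    return node_hours_per_year * price_per_node_hour * years

YEARS = 4  # assumed hardware refresh cycle
local = local_cost(purchase_price=500_000, annual_opex=80_000, years=YEARS)
cloud = cloud_cost(node_hours_per_year=150_000, price_per_node_hour=1.20, years=YEARS)

print(f"local over {YEARS} years: ${local:,.0f}")
print(f"cloud over {YEARS} years: ${cloud:,.0f}")
print("remote wins" if cloud < local else "local wins")
```

Under these assumed numbers the cloud comes out ahead; shift the utilization or rates and the answer flips, which is exactly why the article frames the choice as an ROI calculation rather than a rule of thumb.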
Atos, an international information technology services company, and Bull, a partner for enterprise data, have jointly announced Atos' intended public cash offer for all the issued and outstanding shares in the capital of Bull. The offer is set at 4.90 euros per Bull share in cash, representing a 22 percent premium over Bull's closing price.
Elastic Storage can reduce storage costs by up to 90 percent by automatically moving data onto the most economical storage device. The technology allows enterprises to exploit the explosive growth of data in a variety of forms generated by devices, sensors, business processes and social networks.
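As a generic illustration of the idea behind such automatic tiering, the sketch below picks a storage tier from how recently a file was accessed. It is a hypothetical policy, not IBM Elastic Storage's actual policy engine or API; the tier names and idle-time thresholds are assumptions.

```python
# A hypothetical tiering policy: files that have not been read recently
# migrate to cheaper storage. Tier names and thresholds are illustrative
# assumptions, not IBM Elastic Storage behavior.
import time

DAY = 86_400  # seconds in a day

def choose_tier(last_access_epoch, now=None):
    """Pick a storage tier from the age of a file's last access."""
    now = time.time() if now is None else now
    idle_days = (now - last_access_epoch) / DAY
    if idle_days < 7:
        return "flash"  # hot data stays on fast, expensive storage
    if idle_days < 90:
        return "disk"   # warm data moves to capacity disk
    return "tape"       # cold data migrates to the cheapest tier

now = time.time()
print(choose_tier(now - 3 * DAY, now))    # -> flash
print(choose_tier(now - 200 * DAY, now))  # -> tape
```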
Penguin Computing, experts in high performance, enterprise and cloud computing solutions, has announced the immediate availability of MATLAB Distributed Computing Server on its HPC Cloud, POD. This solution combines POD’s ease-of-use and high performance computing capabilities in the cloud with MATLAB's scale-up capability to solve more demanding and complex problems.
At EMC World 2014, EMC announced major new software-defined storage products and technologies designed to blend the benefits of public and private clouds, delivering to service providers and users, in any industry and of any size, the efficiency, agility, security and control of a hybrid cloud.
On April 25, 2014, Governor Deval Patrick announced a $3 million capital investment to launch the Massachusetts Open Cloud project, a university-industry collaboration designed to create a new public cloud computing infrastructure to spur big data innovation. Governor Patrick also announced the release of the 2014 Mass Big Data Report, which confirms the continued growth and competitiveness of the Commonwealth’s big data industry.
This year’s ISC Cloud conference will be the fifth in the series and will continue the tradition of bringing together experts and users from industry and academia to foster collaboration and innovation in the field of cloud computing. The conference will be held at the Marriott Hotel in Heidelberg from September 29 to 30, 2014, followed by ISC Big Data.
Sharan Kalwani recently joined the HPC group at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, as a computing services architect. Before Fermilab, he was the Subject Matter Expert and project lead at the UberCloud project, helping to realize HPC in the cloud.
Muniyappa Manjunathaiah's research in computational science includes novel and emergent systems and architectures, parallel and distributed computing, cloud computing, mathematical modelling, scalable algorithms, and middleware to support parallel and distributed applications.
Mahdi Bohlouli is a Ph.D. candidate at the University of Siegen. His research interests include Cloud and Grid Computing, Distributed Systems, Knowledge Representation and Modeling, and Big Data.
IBM inventors have patented a cloud computing invention that can improve quality of service for clients by enabling data to be dynamically modified, prioritized and shared across a cloud environment. As more and more companies take advantage of applications, processes and services delivered via the cloud, vendors are struggling with increased complexity and challenges associated with ensuring uninterrupted data availability.
Technology Academy Finland (TAF) has named innovator Prof. Stuart Parkin the winner of the 2014 Millennium Technology Prize, the prominent award for technological innovation. Parkin receives the Prize in recognition of his discoveries, which have enabled a thousand-fold increase in the storage capacity of magnetic disk drives. Parkin’s innovations have led to a huge expansion of data acquisition and storage capacities.
Today's enterprises face unique challenges. In the past, the requirement was simply to upgrade. Today, it's about building an integrated strategy that spans multiple technologies, both existing and new. For example, there's more diversity than ever before in database technology, server technology and data center infrastructure, to name a few areas. At the moment, none of these technologies is replacing the others; instead, they need to be integrated.
ACM Turing Award Goes to Leslie Lamport for Work Enabling Distributed Computing in Data Center, Security, Cloud Landscapes (March 19, 2014, by The Association for Computing Machinery)
The Association for Computing Machinery has named Leslie Lamport, a Principal Researcher at Microsoft Research, as the recipient of the 2013 ACM A.M. Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages.
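One of Lamport's foundational techniques for taming that message-passing chaos is the logical clock that bears his name. The sketch below is an illustrative Python rendering of the idea, assuming a simple two-process exchange; it is not code from ACM or Microsoft Research.

```python
# A minimal Lamport logical clock: processes stamp events so that a
# message's receipt is always ordered after its send, imposing a
# consistent order on otherwise concurrent events.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        """A process increments its clock on every local event."""
        self.time += 1
        return self.time

    def send(self):
        """Sending counts as an event; stamp the message with the clock."""
        return self.local_event()

    def receive(self, stamp):
        """On receipt, jump past the sender's stamp to preserve causality."""
        self.time = max(self.time, stamp) + 1
        return self.time

a, b = LamportClock(), LamportClock()
stamp = a.send()       # a's clock becomes 1
b.receive(stamp)       # b's clock becomes 2, after the send
print(a.time, b.time)  # -> 1 2
```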
Businesses increasingly report that they are able to boost their productivity and competitiveness in the global market by deploying computer simulations and digital modeling. Such applications require high-end computing power and storage that are provided by HPC products and services. The ISC’14 two-day Industry Innovation through HPC track is designed to help engineers, manufacturers and designers gain the right set of tools and methods.
How can organizations embrace, instead of brace for, the rapidly intensifying collision of public and private clouds, HPC environments and Big Data? The current go-to solution for many organizations is to run these technology assets in siloed, specialized environments. This approach falls short, however, typically taxing one datacenter area while others remain underutilized, functioning as little more than expensive storage space.
At Cycle Computing we’re seeing several large trends related to Big Data and Analytics. We started talking about the concept of Big Compute back in October 2012. In many ways, it is the point where HPC meets the challenges of Big Data. As our technical capabilities to collect and store data continue to expand, the problem of how we access and use that data only grows.
Steve Conway, IDC Vice President for HPC, explains that, to date, most data-intensive HPC jobs in the government, academic and industrial sectors have involved the modeling and simulation of complex physical and quasi-physical systems. However, he notes that from the start of the supercomputer era in the 1960s, and even earlier, an important subset of HPC jobs has involved analytics: attempts to uncover useful information and patterns in the data itself.