For the first time, the ISC Cloud Conference is offering attendees an Amazon Web Services tutorial on launching high performance computing clusters in the AWS Cloud. The presenter, Dougal Ballantyne, is an HPC solutions architect at Amazon Web Services. This workshop is free of charge for attendees and will provide an introduction to cfncluster, a framework for launching HPC clusters on AWS.
On Tuesday, September 23, Scientific Computing will host a live panel discussion that...
On September 20, early-bird pricing for the ISC Cloud and ISC Big Data registrations will be...
HP ProLiant Generation 9 (Gen9) Servers are designed to help users reduce cost and complexity, accelerate IT service delivery and enable business growth. They provide a vast pool of processing resources that can be located anywhere, scaled to any workload and available at all times. The servers are optimized for convergence, cloud and software-defined environments.
The National Science Foundation has announced two $10 million projects to create cloud computing testbeds—to be called "Chameleon" and "CloudLab"—that will enable the academic research community to develop and experiment with novel cloud architectures and pursue new, architecturally-enabled applications of cloud computing.
Cloud computing is not only the latest revolution in the Information and Communication Technology (ICT) world, but also a key enabler of innovation and economic development. Within the framework of the CLOUDS project, Madrid-based researchers have made crucial scientific advances in the state of the art of cloud computing.
Leica Microsystems is offering a custom slide scanning and image hosting service for teachers, which makes it possible to share images within the classroom and to expand learning outside the classroom. Glass slides sent to a scanning center are processed to create high-resolution digital image files, which then can be accessed online from a hosted server via any standard Internet browser for study by students anytime, anywhere.
MIT spinout Akselos has developed novel software, based on years of research at the Institute, which uses precalculated supercomputer data for structural components — like simulated “Legos” — to solve FEA models in seconds. Hundreds of engineers in the mining, power-generation, and oil and gas industries are now using Akselos software.
The Michael J. Fox Foundation for Parkinson's Research (MJFF) and Intel have announced a collaboration aimed at improving research and treatment for Parkinson's disease — a neurodegenerative brain disease second only to Alzheimer's in worldwide prevalence. The collaboration includes a multiphase research study using a new big data analytics platform that detects patterns in participant data collected from wearable technologies.
Scientists from IBM have unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70 mW.
As our lives and businesses become ever more intertwined with the Internet and networked technologies, it is crucial to continue to develop and improve cybersecurity measures to keep our data, devices and critical systems safe, secure, private and accessible. The NSF's Secure and Trustworthy Cyberspace program has announced two new center-scale "Frontier" awards to support projects that address grand challenges in cybersecurity science
The PrecisionMDx (PMDx) System is designed for molecular laboratories. It supports molecular diagnostic requirements of entrepreneurial molecular test development companies and the institutional services of molecular pathology departments in community, private and academic medical centers.
Scientists from AT&T, IBM and ACS have announced a proof-of-concept technology that reduces setup times for cloud-to-cloud connectivity from days to seconds. This advance is a major step toward sub-second provisioning with IP and next-generation optical networking equipment, and it enables elastic bandwidth between clouds at high connection-request rates using intelligent cloud data center orchestrators.
Enabling Innovation and Discovery through Data-Intensive High Performance Cloud and Big Data Infrastructure
July 29, 2014 | by George Vacek, DataDirect Networks | Blogs
As the size and scale of life sciences datasets increase — think large-cohort longitudinal studies with multiple samples and multiple protocols — so does the challenge of storing, interpreting and analyzing this data. Researchers and data scientists are under increasing pressure to identify the most relevant and critical information within massive and messy datasets, so they can quickly make the next discovery.
In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It's how Facebook and Google mine your Web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.
IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput speeds between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.
A case study published in The International Journal of Business Process Integration and Management demonstrates that the adoption of integrated cloud-computing solutions can lead to significant cost savings for businesses, as well as large reductions in the size of an organization's carbon footprint.
The National Institute of Standards and Technology (NIST) has issued for public review and comment a draft report summarizing 65 challenges that cloud computing poses to forensics investigators who uncover, gather, examine and interpret digital evidence to help solve crimes.
IBM is announcing a new software-defined storage-as-a-service offering on IBM SoftLayer, code-named Elastic Storage on Cloud, that gives organizations access to a fully supported, ready-to-run storage environment. It includes SoftLayer bare metal resources and high-performance data management, and it allows organizations to move data between their on-premises infrastructure and the cloud.
Registration is now open for the 2014 ISC Cloud and ISC Big Data Conferences, which will be held this fall in Heidelberg, Germany. The fifth ISC Cloud Conference will take place in the Marriott Hotel from September 29 to 30, and the second ISC Big Data will be held from October 1 to 2 at the same venue.
Michael M. Resch, the Director of the Stuttgart High Performance Computing Center (HLRS) will be talking about “HPC and Simulation in the Cloud – How Academia and Industry Can Benefit.” His keynote is of special interest to cloud skeptics, given that prior to 2011, Resch himself was a vocal cloud pessimist. Three years later, he feels that this technology provides a practical option for many users.
IBM Announces $3B Research Initiative to Tackle Chip Grand Challenges for Cloud and Big Data Systems
July 9, 2014 | by IBM | News
IBM has announced it is investing $3 billion over the next five years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments are intended to push IBM's semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.
Moab HPC Suite-Enterprise Edition 8.0 (Moab 8.0) is designed to enhance Big Workflow by processing intensive simulations and big data analysis to accelerate insights. It delivers dynamic scheduling, provisioning and management of multi-step/multi-application services across HPC, cloud and big data environments. The software suite bolsters Big Workflow’s core services: unifying data center resources, optimizing the analysis process and guaranteeing services to the business.
An energy-efficient supercomputer cooled with warm water. How cool is that? Enlightenment has long been the ultimate pursuit of artists, philosophers, scientists, theologians and other sentient minds. Whether delivering the proof to support their theses or investigating a perplexing problem before them, they have poured vast amounts of energy into the pursuit. Energy has now become the problem.
HP has announced new innovations and sustainable enterprise infrastructure solutions designed to deliver the simplicity, efficiency and investment protection organizations need to bridge the datacenter technologies of today and tomorrow. Big data, mobility, security and cloud computing are forcing organizations to rethink their approach to technology, causing them to invest heavily in IT infrastructure.
The lack of a holistic data management environment to support virtualization has left project managers in a haze about how best to address the needs of the business. The sky is beginning to clear somewhat with recent introductions from companies such as Accelrys, Core Informatics and PerkinElmer. Those products, along with CDD, will be discussed to highlight capabilities and vendor approaches.
A complicated decision: to purchase infrastructure or run remotely in the cloud? Bandwidth and data security issues are the easiest gating factors to evaluate, because an inability to access data kills any chance of using remote infrastructure, be it the public cloud or a remote HPC center. If running remotely is an option, then the challenge lies in determining the return on investment (ROI) for the remote and local options ...
Atos, an international information technology services company, and Bull, a partner for enterprise data, have jointly announced Atos's intended public cash offer for all issued and outstanding shares in the capital of Bull. The offer is set at 4.90 euros per Bull share in cash, representing a 22 percent premium over Bull's closing price.