The Arctica 4806xp open network switch is based on the Broadcom StrataXGS Trident II chipset. It is the first 10/40 Gigabit Ethernet Top-of-Rack (ToR) open switch using an x86 control processor, which provides a flexible platform for Software Defined Networking and customer defined applications.
Big Data, it seems, is everywhere, usually characterized as a Big Problem. But researchers at...
In my 15 or so years leading the charge for Ethernet into higher speeds “high performance...
It is the central means of communication of our times: electronic mail. Worldwide, short...
Scientists from AT&T, IBM and ACS announced a proof-of-concept technology that reduces setup times for cloud-to-cloud connectivity from days to seconds. This advance is a major step forward that could one day lead to sub-second provisioning with IP and next-generation optical networking equipment, enabling elastic bandwidth between clouds at high connection-request rates using intelligent cloud data center orchestrators.
HPC-X Scalable Software Toolkit is a comprehensive software suite for high-performance computing environments that provides enhancements to significantly increase the scalability and performance of message communications in the network. The toolkit provides complete communication libraries to support the MPI, SHMEM and PGAS programming models, as well as performance accelerators that take advantage of Mellanox scalable interconnect solutions.
IBM is making high performance computing more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer will provide industry-standard InfiniBand networking technology to connect SoftLayer bare metal servers. This will enable very high data throughput between systems, allowing companies to move workloads traditionally associated with HPC to the cloud.
Big Web sites usually maintain their own “data centers,” banks of tens or even hundreds of thousands of servers, all passing data back and forth to field users’ requests. Like any big, decentralized network, data centers are prone to congestion: Packets of data arriving at the same router at the same time are put in a queue, and if the queues get too long, packets can be delayed.
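The congestion mechanism described above can be sketched with a toy simulation. This is an illustrative model only, not code from the research in question: packets arrive at a single router output port, and whenever packets arrive faster than the port can forward them, each packet waits longer in the queue than the one before it.

```python
def simulate_router(arrivals, service_time):
    """Simulate a FIFO router queue.

    arrivals: sorted list of packet arrival times.
    service_time: time the port needs to forward one packet.
    Returns the per-packet queuing delay (wait before forwarding starts).
    """
    delays = []
    next_free = 0.0  # time at which the output port becomes free
    for t in arrivals:
        start = max(t, next_free)   # wait if the port is busy
        delays.append(start - t)
        next_free = start + service_time
    return delays

# Packets arrive every 1.0 time unit, but forwarding one takes 1.25 units:
# the queue, and therefore the delay, grows with every packet.
delays = simulate_router([i * 1.0 for i in range(8)], service_time=1.25)
```

Running this shows the first packet is forwarded immediately while each later packet waits 0.25 units longer than its predecessor, which is exactly the queue buildup the article describes.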
Mathematical equations can make Internet communication via computer, mobile phone or satellite many times faster and more secure than today. Results with software developed by researchers from Aalborg University in collaboration with the Massachusetts Institute of Technology (MIT) and California Institute of Technology (Caltech) are attracting attention in the international technology media.
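The software referred to here builds on network coding, in which intermediate nodes combine packets mathematically rather than merely forwarding them. As a toy illustration only (the actual research uses random linear network coding over finite fields, not plain XOR), a single XOR-coded packet can serve two receivers that each already hold one of the originals:

```python
def xor_bytes(p, q):
    """Combine two equal-length packets with a bitwise XOR."""
    return bytes(x ^ y for x, y in zip(p, q))

# Two original packets of equal length (contents are arbitrary examples).
pkt_a = b"hello world!"
pkt_b = b"network code"

# The relay transmits one coded packet instead of two originals.
coded = xor_bytes(pkt_a, pkt_b)

# A receiver that already holds pkt_a recovers pkt_b from the coded packet,
# since (a XOR b) XOR a == b.
recovered = xor_bytes(coded, pkt_a)
```

The saving comes from the relay sending one coded transmission where two plain forwards would otherwise be needed; the real systems generalize this with random coefficients so that any sufficiently large set of coded packets suffices to decode.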
Internet access is becoming increasingly mobile, and the next billion users will experience the Internet in ways different from those already online. The experience of Internet connectivity is far from uniform, and observing the variety of connectivity, and how it changes over time, is important. Smartphone users around the globe can download an app and contribute their measurements to a global picture of Internet diversity and evolution.
In quantum mechanics, interactions between particles can give rise to entanglement, a strange type of connection that could never be described by a non-quantum, classical theory. These connections, called quantum correlations, are present in entangled systems even if the objects are not physically linked. Entanglement is at the heart of what distinguishes purely quantum systems from classical ones, and of why they are potentially useful.
For the past 21 years, TOP500.org has been ranking supercomputers by their performance on the LINPACK Benchmark. Published twice a year, the release of the list is anticipated by the industry. As with any such ranking, the top of the list often garners the most attention. However, focusing only on the top of the list limits one's understanding of the different supercomputers in the TOP500...
Washington State University has developed a wireless network on a computer chip that could reduce energy consumption at huge data farms by as much as 20 percent.
In the coming decades, we will likely commute to work and explore the countryside in autonomous, or driverless, cars capable of communicating with the roads they are traveling on. A convergence of technological innovations in embedded sensors, computer vision, artificial intelligence, control and automation, and computer processing power is making this feat a reality.
Internet regulation in the United States is potentially facing a major change. FCC Internet Neutrality rules — also referred to as Net Neutrality rules — currently apply, but thanks to pressure from Internet Service Providers (ISPs), legislators and recent court rulings, that might change. You have undoubtedly heard the term Net Neutrality before, but may be at a loss regarding what it means or what its implications are.
If future generations were to live and work on the moon or on a distant asteroid, they would probably want a broadband connection to communicate with home bases back on Earth. They may even want to watch their favorite Earth-based TV show. That may now be possible thanks to a team of researchers from MIT's Lincoln Laboratory who, working with NASA last fall, demonstrated for the first time that a data communication technology exists ...
Global research and education networks make up a critical circulatory system that supports the HPC community — connecting researchers in all domains to their collaborators, their experiments, their data and their computing resources, regardless of geographic location.
The HPC Advisory Council, together with University of São Paulo, will hold the HPC Advisory Council Brazil Conference 2014 on May 26, 2014, in São Paulo, Brazil. The conference will focus on High-Performance Computing (HPC) usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. The conference is open to the public and will bring together system managers, researchers, developers, computational scientists and industry affiliates.
Data transfers from the Large Hadron Collider at CERN in Switzerland to sites in the U.S. have historically taken different paths — 15 in all — via 10 gigabit per second (Gbps) links separately managed by three research networks in the U.S. and Europe. So, what would happen if those massive datasets were instead transferred using a single 100 Gbps connection?
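The practical stakes of link capacity can be made concrete with back-of-the-envelope arithmetic. The dataset size below is a hypothetical example, not a figure from the article, and the calculation assumes a fully utilized link with no protocol overhead:

```python
def transfer_hours(dataset_tb, link_gbps):
    """Hours to move dataset_tb terabytes over a link_gbps link,
    assuming full utilization and no protocol overhead."""
    bits = dataset_tb * 1e12 * 8          # terabytes -> bits
    return bits / (link_gbps * 1e9) / 3600

# Illustrative: a 100 TB physics dataset over one 10 Gbps path
# versus a single 100 Gbps connection.
t10 = transfer_hours(100, 10)
t100 = transfer_hours(100, 100)
```

Under these assumptions the same dataset that ties up a 10 Gbps path for roughly a day moves in a couple of hours at 100 Gbps, which is why consolidating onto a single fat pipe is an interesting experiment.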
On April 25, 2014, Governor Deval Patrick announced a $3 million capital investment to launch the Massachusetts Open Cloud project, a university-industry collaboration designed to create a new public cloud computing infrastructure to spur big data innovation. Governor Patrick also announced the release of the 2014 Mass Big Data Report, which confirms the continued growth and competitiveness of the Commonwealth's big data industry.
The nation's top telecoms regulator is proposing to allow a pay-for-priority fast lane on the Internet for movies, music and other services to get to people's homes. The proposed rules come after a federal appeals court struck down previous "net neutrality" rules designed to prevent Internet access providers such as Comcast from discriminating against certain traffic flowing to their customers.
Chris Catherasoo has broad expertise in state-of-the-art supercomputers, high-end visualization systems, high-performance storage and networking, including hardware, software, scheduling and operations. He has technical expertise in numerical methods and algorithm development, and in software design and development, including programming (Fortran and C), code testing and validation, configuration control and documentation.
The last time the IEEE 802.3 Working Group addressed the “Next Rate” of Ethernet was when 10 GbE was Ethernet’s fastest rate. That effort resulted in the development of two new rates — 40 GbE and 100 GbE. The justification for two rates was that 40 GbE was intended to provide the upgrade path for servers, while 100 GbE would target network aggregation applications.
Gilad Shainer serves as the vice president of marketing at Mellanox Technologies. Mr. Shainer joined Mellanox in 2001 as a design engineer and later managed several hardware and software product developments. Mr. Shainer later served in senior marketing management roles between July 2005 and February 2012.
The European Parliament adopted a "net neutrality" bill on April 3, 2014, barring Internet service providers from giving preference to some kinds of traffic on their networks — such as streaming video — when it profits them. The legislation responded to fears that providers would allot the lion's share of their bandwidth to people and companies willing to pay extra, slowing the Internet for others.
The HPC Advisory Council, an organization for high-performance computing research, outreach and education, has announced that it will host, in conjunction with ISC'14, the 5th Annual HPC Advisory Council European Workshop 2014 in the Congress Center Leipzig on June 22, 2014. The workshop will focus on HPC productivity, and advanced HPC topics and futures.
The HPC Advisory Council will hold the 2014 European Conference on June 22, 2014, in conjunction with the ISC’14 conference in Leipzig, Germany. The workshop will focus on HPC productivity, and advanced HPC topics and futures, and will bring together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High-Performance Computing.
Mellanox Technologies, a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, announced a collaboration with the University of Cambridge for the Square Kilometer Array (SKA) project. The University of Cambridge selected the company's Virtual Protocol Interconnect (VPI) solution to provide it with interconnect performance and protocol flexibility for SKA test-bed clusters. The University of Cambridge and Mellanox will use the compute clusters for various development projects for the SKA project, an international effort to build the world's largest radio telescope.
CloudX is a reference architecture for building efficient cloud platforms. It is based on the OpenCloud architecture, which leverages off-the-shelf components of servers, storage, interconnect and software to form flexible and cost-effective public, private and hybrid clouds.