Bio-engineers are working on the development of biological computers with the aim of designing small circuits made from biological material that can be integrated into cells to change their functions. In the future, such developments could enable cancer cells to be reprogrammed, thereby preventing them from dividing at an uncontrollable rate. Stem cells could likewise be reprogrammed into differentiated organ cells.
New software algorithms have been shown to significantly reduce the time and...
From performing surgery to driving cars, today’s robots can do it all. With chatbots recently...
The Oil and Gas High Performance Computing (HPC) Workshop, hosted annually at Rice University,...
High Performance Parallelism Pearls, the latest book by James Reinders and Jim Jeffers, is a teaching juggernaut that packs the experience of 69 authors into 28 chapters. It is designed to get readers running on the Intel Xeon Phi family of coprocessors, and it provides tools and techniques for adapting legacy codes and increasing application performance on Intel Xeon processors.
Error-correcting codes are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call “noise.”
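The idea can be shown with the simplest possible scheme, a 3x repetition code. This is a toy for illustration only; real transmission systems use far denser codes such as Reed-Solomon or LDPC, but the principle of adding redundancy to survive noise is the same:

```python
# Toy error correction with a 3x repetition code: each bit is sent
# three times, and a majority vote recovers it even if noise flips
# one of the three copies.

def encode(bits):
    # repeat every bit three times
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # majority vote over each group of three received bits
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] ^= 1                  # "noise" flips one transmitted bit
assert decode(sent) == msg    # the flip is corrected
```

The price is a threefold bandwidth cost to correct a single error per group, which is exactly why practical codes are so much cleverer.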
MIT researchers have developed an algorithm for bounding that they’ve successfully implemented in a robotic cheetah — a sleek, four-legged assemblage of gears, batteries and electric motors that weighs about as much as its feline counterpart. The team recently took the robot for a test run, where it bounded across the grass at a steady clip. The researchers estimate the robot may eventually reach speeds of up to 30 mph.
As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community.
Trust is good, control is better. This also applies to the security of computer programs. Instead of trusting "identification documents" in the form of certificates, the JOANA software analysis tool examines a program's source code directly. In this way it detects leaks through which secret information could escape or outside attackers could enter the system. At the same time, JOANA keeps the number of false alarms to a minimum.
An Android app has been created that allows users to get together to crack a modern cryptographic code. All encryption schemes, including the widely used RSA, can in theory be broken. How, then, can we ensure that our data remains protected? The answer lies in the time and effort required to break the code.
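That "time and effort" argument can be made concrete with a back-of-the-envelope sketch; the guessing rate below is an assumption chosen purely for illustration:

```python
# Brute-forcing an n-bit key means trying up to 2**n keys, so every
# extra bit doubles the attacker's work. Assume a (generous) rate of
# 10**12 guesses per second.

def years_to_search(bits, guesses_per_sec=10**12):
    seconds = 2**bits / guesses_per_sec
    return seconds / (3600 * 24 * 365)

# At this rate a 56-bit key (DES-sized) falls in under a day,
# while a 128-bit key would take on the order of 10**19 years.
```

Exponential growth in the key length, not secrecy of the algorithm, is what keeps well-designed ciphers practically unbreakable.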
NVIDIA CUDA 6.5 brings GPU-accelerated computing to 64-bit ARM platforms. The toolkit provides programmers with a platform to develop advanced scientific, engineering, mobile and HPC applications on GPU-accelerated ARM and x86 CPU-based systems. Features include support for Microsoft Visual Studio 2013, cuFFT callbacks and improved debugging for CUDA Fortran applications.
High-profile security breaches, data thefts and cyberattacks are increasing in frequency, ferocity and stealth. They result in significant loss of revenue and reputation for organizations, destabilize governments, and hit everyone’s wallets. Cybersecurity is in the global spotlight and, now more than ever, organizations must understand how to identify weaknesses and protect company infrastructure from incursions.
New research by University of Montana doctoral student Jared Oyler provides improved computer models for estimating temperature across mountainous landscapes. Oyler provided a new climate dataset for ecological and hydrological research and natural resource management.
Creating a realistic computer simulation of how light suffuses a room is crucial not just for animated movies like Toy Story or Cars, but also in industry. Specialized computational methods can achieve this, but they require great effort. Computer scientists from Saarbrücken have developed a novel approach that vastly simplifies and speeds up the whole calculation.
The first thousand-robot flash mob has assembled at Harvard University. Instead of one highly complex robot, a "kilo" of robots collaborate, providing a simple platform for enacting complex behaviors. Called Kilobots, these extremely simple robots are each just a few centimeters across and stand on three pin-like legs.
With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy efficient enough so that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale once it arrives?
LabVIEW 2014 system design software standardizes the way users interact with hardware by reusing the same code and engineering processes across systems, making applications easier to scale in the future. This saves time and money as technology advances, requirements evolve and time-to-market pressure increases.
A computer algorithm being developed by Brown University researchers enables users to instantly change the weather, time of day, season, or other features in outdoor photos with simple text commands. Machine learning and a clever database make it possible.
As our lives and businesses become ever more intertwined with the Internet and networked technologies, it is crucial to continue to develop and improve cybersecurity measures to keep our data, devices and critical systems safe, secure, private and accessible. The NSF's Secure and Trustworthy Cyberspace program has announced two new center-scale "Frontier" awards to support projects that address grand challenges in cybersecurity science.
What if computer screens had glasses instead of the people staring at the monitors? That concept is not too far afield from technology being developed by UC Berkeley computer and vision scientists. The researchers are developing computer algorithms to compensate for an individual’s visual impairment, and creating vision-correcting displays that enable users to see text and images clearly without wearing eyeglasses or contact lenses.
The AMD Opteron A1100-Series developer kit features AMD's first 64-bit ARM-based processor, codenamed "Seattle." The processor is offered in 4- and 8-core ARM Cortex-A57 configurations; up to 4 MB of shared L2 and 8 MB of shared L3 cache; and configurable dual DDR3 or DDR4 memory channels with ECC at up to 1866...
Ensemble forecasting is a key part of weather forecasting. Computers typically run multiple simulations using slightly different initial conditions or assumptions, and then analyze them together to try to improve forecasts. Using Japan’s K computer, researchers have succeeded in running 10,240 parallel simulations of global weather, the largest number ever performed, using data assimilation to reduce the range of uncertainties.
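The basic ensemble recipe (perturb the initial conditions, run the model many times, summarize the resulting spread) can be sketched with a toy chaotic system; the logistic map below merely stands in for a real weather model and has nothing like the scale of the K computer experiment:

```python
# Toy ensemble forecast: run the same nonlinear model from slightly
# perturbed initial conditions, then summarize the ensemble with its
# mean and spread (the spread is an estimate of forecast uncertainty).
import random
import statistics

def model(x, steps=20):
    # chaotic logistic map standing in for a weather model
    for _ in range(steps):
        x = 3.9 * x * (1 - x)
    return x

random.seed(0)
members = [model(0.5 + random.uniform(-1e-3, 1e-3)) for _ in range(64)]
mean = statistics.mean(members)      # ensemble-mean "forecast"
spread = statistics.stdev(members)   # uncertainty estimate
```

Because the model is chaotic, tiny initial perturbations grow into large differences, which is exactly why a single deterministic run can be misleading and an ensemble is needed.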
HPC-X Scalable Software Toolkit is a comprehensive software suite for high-performance computing environments that provides enhancements to significantly increase the scalability and performance of message communications in the network. The toolkit provides complete communication libraries to support the MPI, SHMEM and PGAS programming models, as well as performance accelerators that take advantage of Mellanox scalable interconnect solutions.
Mathematical equations can make Internet communication via computer, mobile phone or satellite many times faster and more secure than today. Results with software developed by researchers from Aalborg University in collaboration with the Massachusetts Institute of Technology (MIT) and California Institute of Technology (Caltech) are attracting attention in the international technology media.
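The work builds on network coding, in which intermediate nodes algebraically combine packets instead of merely storing and forwarding them. A minimal sketch of the core trick, using XOR as the simplest possible combination (real deployments use random linear coding over larger finite fields):

```python
# Network-coding toy: a relay transmits the XOR of two packets.
# Any receiver that already holds one packet can recover the other
# from the single coded transmission, saving a retransmission.

pkt_a = b"hello"
pkt_b = b"world"

# relay sends one coded packet instead of two plain ones
coded = bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

# a receiver holding pkt_a recovers pkt_b by XORing again
recovered = bytes(x ^ y for x, y in zip(coded, pkt_a))
assert recovered == pkt_b
```

The bandwidth saving in this two-packet case generalizes: coding over many packets lets receivers reconstruct data from any sufficiently large set of coded transmissions, which is what makes the approach faster and more robust than plain forwarding.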
EPCC is delighted to be part of a team that has won an award presented at the International Supercomputing Conference (ISC14) in Leipzig (22-26 June 2014). The awards recognize outstanding application of HPC computing for business and scientific achievements.
Integration between Moab HPC Suite and Bright Cluster Manager provides enhanced functionality that enables users to dynamically provision HPC clusters based on both resource and workload monitoring. The combined capabilities also create a better solution for managing technical computing and Big Workflow requirements.
Altair has announced that the National Supercomputing Center for Energy and the Environment (NSCEE) at the University of Nevada, Las Vegas, (UNLV) has chosen PBS Professional to replace its previous high-performance computing (HPC) workload management implementation.
In today’s digitally driven world, access to information appears limitless. But when you have something specific in mind whose name you don’t know, like that niche kitchen tool you saw at a friend’s house, it can be surprisingly hard to sift through the volume of information online and know how to search for it. Or the opposite problem can occur: we can look up anything on the Internet, but how can we be sure we're finding every...
Computer systems today can be found in nearly all areas of life, from smartphones to smart cars to self-organized production facilities. These systems supply rapidly growing data volumes, and computer science now faces the challenge of processing these huge amounts of data (big data) in a reasonable and secure manner.