Bigger than Big Data: The Key to Successful Translational Science
Is Big Data really the biggest challenge facing translational science at the moment? Certainly there are issues with the complexity and size of omics data, which Big Data techniques can help address, but there are two more pressing challenges: enabling collaboration whilst facilitating information sharing, and interpreting multiple types of omics data together (multi-omics).
Research and development (R&D) organizations and hospitals are keen to break down the cultural barriers that have impeded open scientific data collaboration in the past. We’ve seen pharma R&D groups re-organizing to become more agile and move their teams closer together in both mindset and location. Data silos are becoming better integrated and, recently, Big Data and open data, including pre-competitive information, have started to be shared between academic medical centers and pharma. All this is pushing the boundaries of what organizations, cross-border collaborations and national laws allow.
Happily, patients are helping to drive this change and are becoming more involved in the way translational science is evolving. Unfortunately, law, accountability and ethics remain a major bottleneck when it comes to sharing data. IT systems can help address consent, data privacy and security, but collaboration agreements and contracts are still key. Even once agreements are in place, groups must ensure confidentiality and confirm that patients have consented to the appropriate use of their research data. With flexible data management software and good data provenance, organizations can begin to overcome these challenges and ensure they support research in accordance with their collaboration agreements.
Although data in the life sciences industry is getting bigger, it isn’t really the case that the data is too big to deal with. Bioinformatics is shifting from a genome-centric approach to a more holistic understanding of broader biological processes. The field must now process and analyze multi-omics data obtained under different conditions and accumulated from high-throughput technologies.
We’ve been working with the Segal Cancer Centre to help change the way they run their translational studies. Their multi-omics datasets are highly complex and require a completely new, structured approach to data capture and manipulation, which enables them to make sense of their data in the right context. Multi-omics will play a key role in understanding systems medicine and will enhance the knowledge that comes from sharing data and working together. As the project expands across Quebec, we know that good data management will aid collaboration, but the effort will also need support from scientists, lawyers and the government to succeed.
For now, we are waiting to see whether the number-crunching approach of Big Data analysis will deliver on its promises. In the meantime, working cleverly and collaboratively, whilst utilizing multi-omics data, is the key to successful translational science.
Robin Munro, Ph.D., is Director of Translational Sciences at IDBS. He may be reached at editor@ScientificComputing.com.