Here at Nonlinear, we are delighted to have a new product to sell: Symphony. It is available now.
I asked the product manager, Dr Rob Tonge of Waters, a few questions about its inception, what it does and why people are so enthusiastic about it.
Here is the interview:
“How did Symphony come about?”
Initially, we were working with The Phenome Centre in London on large-scale population studies, and we gained insight into the problems that scientists experience when performing high-throughput metabolomics.
“What problems were these?”
Research groups are increasingly trying to perform larger and larger experiments, in areas such as Personalised or Precision Medicine. Big Data is the flavour of the day. The field is being led by genomics and next generation sequencing technologies, but proteins and metabolites are also of interest to provide a more holistic picture of the biology under study.
The scale of experiments is moving from tens, to hundreds, to thousands of samples as we push towards population-scale investigations. However, many of the methods developed for omics were built with research-scale work in mind, often involving very manual processes, and these can be error-prone when used at larger scale. Thus, for larger experiments, automated informatics workflows are essential for efficiency, accuracy, and sensitivity.
“Do all these groups want the same solution?”
No. When we talk to customers in the research environment, many of them have very varied needs, often from one project to the next. One size really does not fit all: labs want to use the latest cutting-edge methods and algorithms, and the potential to future-proof their operations.
“So the challenge was to create something flexible enough to allow for a variety of workflows.”
Exactly. Researchers want to experiment with ideas and require informatics systems that enable creativity, not constrain it.
“Presumably, if you are talking about automation, you are talking about time saving as well as reduced errors.”
Absolutely. In today’s world, time certainly IS money. People have more and more to do in less and less time and do not have time to waste on repetitive tasks when automated protocols could greatly accelerate their work.
“Were there any other factors that you took into account when designing Symphony?”
Yes, we now live in highly connected communities, where social media platforms such as Twitter, Facebook and LinkedIn bring like-minded people together. There are great benefits to be had from working together to share ideas, share applications and code, and be catalytic on each other’s thinking.
“OK, I understand the background to the product now; tell me more about the Symphony solution.”
Symphony is a client/server application that automates tasks. It is a framework into which different tasks can be plugged. In the example below, we have built a data-processing pipeline with four tasks (blue, yellow, green and red), and we are processing the incoming blue data into the resultant green, transformed data at the bottom.
The first version of Symphony is initiated by MassLynx at sample acquisition, and typical tasks that can be applied include moving a file to a server, de-noising, compressing, renaming, making a copy, running a series of executables, and so on.
Symphony is built with flexibility, creativity and efficiency in mind. It accepts a wide range of tasks, and it is very easy to construct a pipeline sequence by dragging and dropping task icons together. We also have the facility to run conditional tasks that collect data from one task and use it in another.
Tasks and pipelines can be saved into a library for future use, and pipelines can be configured to work across multiple PCs and across networks. Symphony has an excellent troubleshooting system to help a user diagnose pipeline configurations, and it comes with a Home Page through which we can send a user information such as news items, details of the latest builds, and items from the Symphony Community.
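Symphony itself is a proprietary product, so its actual API is not shown here. Purely as an illustration of the general pattern described above — pluggable tasks chained into a pipeline, with each task's output feeding the next — here is a minimal sketch in Python. All names (`Task`, `Pipeline`, the example task functions) are hypothetical and are not part of Symphony.

```python
# Hypothetical illustration of a task-pipeline pattern.
# These classes and names are NOT Symphony's actual API.

class Task:
    """A named processing step: a function applied to incoming data."""
    def __init__(self, name, func):
        self.name = name
        self.func = func

    def run(self, data):
        return self.func(data)


class Pipeline:
    """Chains tasks so that each task consumes the previous task's output."""
    def __init__(self, tasks):
        self.tasks = tasks

    def run(self, data):
        for task in self.tasks:
            data = task.run(data)
        return data


# Toy example in the spirit of "de-noise, then transform" on numeric readings.
pipeline = Pipeline([
    Task("de-noise", lambda xs: [x for x in xs if abs(x) > 0.1]),
    Task("scale",    lambda xs: [round(x * 2, 2) for x in xs]),
])

result = pipeline.run([0.05, 1.5, -0.02, 3.0])
print(result)  # [3.0, 6.0]
```

In a real system of this kind, each task would typically also report status back to a monitoring component, which is what enables the kind of troubleshooting and diagnostics described above.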
“Where do you see this product being used?”
I’ve just returned from ASMS, where we launched Symphony, and it was well received by the community there. The main benefit to all users is that Symphony saves hands-on time in data processing, whether in research labs or in higher-throughput labs such as DMPK CROs. Data processing can be initiated as soon as the file is recorded by the instrument and can run automatically, saving time and allowing out-of-hours working.
As well as efficiency, automation brings the additional benefit of reducing the errors that are all too possible when performing repetitive tasks. And, as many customers require today, Symphony allows the implementation of Personalised Data Processing – that is, the data processing that THEY need in THEIR laboratories.
“That really does sound great! Let’s finish with some feedback from three of our users”
“By automating routine data-processing steps, Symphony saves our operator time, and allows us to conduct the most time-consuming parts of the informatics workflow in parallel to acquisition. Best case, it can save MONTHS of processing time, and in combination with noise-reduction, petabytes of storage. We see great value in the modular nature of Symphony, allowing us to rapidly develop and test new processes for handling experimental data, including real-time QC, prospective fault detection, and tools for ensuring data-integrity.”
Jake Pearce, Informatics Manager, National Phenome Centre, London, UK.
“Symphony offers a solution to address many challenges, providing a platform with automated, flexible and adaptable workflows for high-throughput handling of proteomic data. Just the simple step of being able to seamlessly and automatically copy raw files to a remote file location whilst a column is conditioning maximises the time we can use the instrument for analysis. Previously, the instrument could be idle for 1-2 hours whilst data was copied to a filestore in preparation for processing. With three Synapts generating data 24/7 in our laboratory, this alone is a major advance.
Symphony’s flexibility of being able to execute sample specific workflows directly from the MassLynx sample list will have a major impact on our productivity. The scalable client-server architecture makes Symphony perfect for large scale high-throughput MS data processing, where the processing of highly complex data can only be addressed by calling on a range of computational resources.”
Paul Skipp, Lecturer in Proteomics and Centre Director, Centre for Proteomics Research, University of Southampton, UK.
"New approaches are continuously being developed to extract increasing amounts of data from very data-rich ion mobility-assisted HDMSE experiments. Plugging new algorithms into an automated Symphony pipeline provides the ingredients for exponential growth in information content that can be extracted from both new and archived samples. Automation brings the possibilities of finding optimal parameter settings and reducing the possibility of errors, without significant time penalties. I was amazed at the level of detail that I can see using these approaches!"
Maarten Dhaenens, Laboratory for Pharmaceutical Biotechnology, University of Gent
Want to learn more?
Contact us if you would like to try Symphony.