Neurodata Without Borders, day 1

Intro

I am attending a neuro meeting at the fantastic Janelia Farm facilities to see how experts in electrophysiology and computer science, among other fields, decide on a common format to express recordings of neuronal activity and the surrounding experimental metadata.

The NWB mandate outlines a clear mission, timeline and concrete steps:

  1. August 2014: Project Start.
  2. Phase 1: Identify use cases and evaluation criteria.
  3. Phase 2: Select/assemble the most promising approaches and develop a data format.
  4. Phase 3: Test and fine-tune it.
  5. July 2015: Project ends.

This post would not have been possible without the collaborative editing and many tweets that happened during the event.

E-phys formats

Now, brace for impact. Here’s a small list of common e-phys file formats that were created by different labs:

KWIK, NIX, MEF, Svoboda lab, LBNL BRAIN, StorageBIT, ARF, NSDF, WFDB, epHDF, MTSF, NeXus, NDF, Brainliner, NeuroHDF, NEO, Neuroshare, Ovation, Neuralynx, EEGBase

For a more nuanced view of some of the main data formats, please have a look at the considerations for developing a standard for storing e-phys data in HDF5 and the NWB data and file formats summary.

Wouldn’t it be a massive win to choose a single data format and not fall into the traditional academic mantra that states: “different formats are good for different things”? Or even worse, create yet another competing standard?

How many of those formats are actually used in research publications? Which is the one seeing the most adoption in academic literature so far?

Why shouldn’t we just choose the top N formats with the most mindshare, for the greater good (reproducibility, data sharing, interoperability)?

Is HDF5 really the best format to adopt given the difficulties encountered with modern parallel processing frameworks such as Spark or the neuro-oriented Thunder?
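To make that friction concrete, here is a minimal sketch (mine, not something shown at the meeting) of how HDF5 data usually reaches Spark. HDF5 files are not splittable by Hadoop-style input readers, so the common workaround is to read everything on one machine and re-distribute it afterwards. The file and dataset names below are made up:

    import h5py
    from pyspark import SparkContext

    sc = SparkContext(appName="ephys-sketch")

    # HDF5 is not splittable by Hadoop-style input formats, so the whole
    # array is read on a single machine first...
    with h5py.File("recording.h5", "r") as f:
        channels = f["ephys/channels"][...]  # (channels, samples) array in driver memory

    # ...and only then can the per-channel work be distributed.
    rdd = sc.parallelize(list(enumerate(channels)))
    channel_means = rdd.mapValues(lambda samples: float(samples.mean()))
    print(channel_means.take(5))

That extra copy through a single machine is exactly the kind of bottleneck a distributed framework is supposed to avoid.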

Let’s see if we can find a fix for this e-phys Babel.

The talkathon

Several labs describe and present their custom e-phys formats. Most of them overlap heavily in attributes, structure and features. Specifications vary, but the labs seem to revolve around HDF5, a hierarchical file format that stores all the attributes of an experiment, from images to time series, at varying degrees of complexity.
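As a rough illustration of that shape (the group and attribute names are invented, this is not any lab's actual spec), such a layout in h5py looks like this:

    import h5py
    import numpy as np

    with h5py.File("session_001.h5", "w") as f:
        # experiment-level metadata as attributes on the root group
        f.attrs["experimenter"] = "Jane Doe"
        f.attrs["session_start"] = "2014-11-01T09:30:00"

        # raw acquisition data lives in its own group
        acquisition = f.create_group("acquisition")
        voltage = acquisition.create_dataset("voltage", data=np.zeros((32, 30000)))
        voltage.attrs["sampling_rate_hz"] = 30000.0
        voltage.attrs["unit"] = "volts"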

It is interesting to see how, for what is at its core an event-processing problem, there are very few mentions of industry and open-source event-processing frameworks.

The exception is Jeremy Freeman, who demoes a very interesting combination of Spark, Thunder and Lightning, but warns us that the community's buy-in on HDF5 complicates things.

Software developers in the room recommend exposing a strongly typed API that deals with the raw data attributes via an intermediate representation, instead of changing the HDF5 container at every specification change or experimental novelty. This idea resonates quite well with the NEO approach. One problem that arises with internal representations is keeping track of provenance, since encapsulation might hide processing details needed to follow an experiment step by step.
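A minimal sketch of that idea, with entirely hypothetical names: analysis code depends only on the typed object, a single serializer knows the on-disk layout, and provenance travels along with the data:

    from dataclasses import dataclass, field
    from typing import Dict
    import h5py
    import numpy as np

    @dataclass
    class TimeSeries:
        """Typed intermediate representation: what analysis code depends on."""
        samples: np.ndarray            # shape: (channels, samples)
        sampling_rate_hz: float
        provenance: Dict[str, str] = field(default_factory=dict)  # step -> description

    def save_hdf5(ts: TimeSeries, path: str) -> None:
        """The only place that knows the on-disk layout; swap it out when the spec changes."""
        with h5py.File(path, "w") as f:
            data = f.create_dataset("data", data=ts.samples)
            data.attrs["sampling_rate_hz"] = ts.sampling_rate_hz
            prov = f.create_group("provenance")
            for step, description in ts.provenance.items():
                prov.attrs[step] = description

With this split, a specification change only touches save_hdf5(), and the provenance dictionary keeps the processing history from disappearing behind the encapsulation.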

Personal conclusions

It seems to me that e-phys recording can be approached as a large-scale logging problem, therefore:

  1. Using a framework that aggregates events at scale is crucial to guarantee a smooth and fast data analysis experience, including slicing by recording session or any other criteria the (neuro)scientist decides on.
  2. Leaving the internal (intermediate) representation of the data from point 1 untouched is the most convenient approach, especially since HDF5 does not play well with modern parallel frameworks.
  3. Exporting the data from point 1 as HDF5 for sharing seems reasonable (to me, at least), given that it is the most popular container within this science niche.
  4. Writing importers/exporters (serializers) between Thunder and HDF5 seems like an interesting hackathon challenge (see the sketch below). Adopting KWIK, already used by many, as a particular specification could be interesting w.r.t. interoperability.
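Since Thunder's Series data boils down to a distributed collection of (key, values) records, the export half of that challenge could start as small as this sketch. Every name below is invented for illustration:

    import h5py
    import numpy as np

    def export_series_to_hdf5(records, path, dataset="acquisition/voltage"):
        """records: (channel_index, 1-D samples array) pairs, e.g. the
        result of collect() on a Thunder-like Series RDD."""
        ordered = sorted(records, key=lambda kv: kv[0])   # order by channel index
        matrix = np.vstack([samples for _, samples in ordered])
        with h5py.File(path, "w") as f:
            # h5py creates the intermediate "acquisition" group automatically
            f.create_dataset(dataset, data=matrix)

    # Usage, assuming series_rdd holds (index, samples) pairs:
    # export_series_to_hdf5(series_rdd.collect(), "export.h5")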

Comments? Suggestions?