
neuGRID Achievements

Harnessing its massive infrastructure, neuGRID has been running so-called analysis challenges.

 

These analysis challenges demonstrate neuGRID's readiness and scalability to offer integrated access to, and adequate processing of, large volumes of brain imaging data.

 

The remainder of this page provides synthesized descriptions, as well as pointers to more detailed information.

N4U - Analysis Challenge 3

 

The aim of the third analysis challenge (AC3) is to compare and characterize the behaviour of different pipelines for the analysis of the structural and functional human brain connectomes.

 

The structural connectome consists of all the physical neuron connections within the brain. With current imaging techniques, the connectome can be tracked and visualized at the macro scale. The main feature of the functional connectome, on the other hand, is the degree of metabolic synchronism among grey matter structures, rather than the specific spatial trajectories among them.

 

The pipelines to be run for the head-to-head comparisons are: CMTK for the structural analyses; FCP-CPAC, UMCP and NIAK for the functional analyses. The data come from Diffusion Weighted Imaging (DWI) as well as from resting-state functional Magnetic Resonance Imaging (R-fMRI) datasets, such as: the 1000 Functional Connectomes Project (http://fcon_1000.projects.nitrc.org/fcpClassic/FcpTable.html), the International Neuroimaging Data-sharing Initiative (INDI: http://fcon_1000.projects.nitrc.org/indi/IndiPro.html; http://fcon_1000.projects.nitrc.org/indi/IndiRetro.html), NESDA (http://www.nesda.nl/en/) and the Geneva/Basel dataset.

 

Because the aforementioned studies are not harmonized, confounding factors must be controlled for through ad-hoc stratifications (by acquisition protocol, scanner and diagnosis) and multivariate analysis, in order to minimize their possible unintended effects. This ensures that any differences among the performances of the tested algorithms are not due to confounding variables and interactions; confounders will therefore be modelled during the statistical analysis.
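As an illustration of this kind of confounder modelling, the minimal Python sketch below enters scanner, acquisition protocol and diagnosis as categorical covariates alongside the pipeline factor; the file name, column names and the ordinary-least-squares model are illustrative assumptions, not the actual AC3 analysis plan.

# Minimal sketch: adjust a per-subject connectome metric for confounders
# before comparing pipelines. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per subject per pipeline: the metric produced by that pipeline,
# plus the candidate confounders recorded for the subject.
df = pd.read_csv("connectome_metrics.csv")  # hypothetical input table

# Model the metric as a function of pipeline while controlling for
# scanner, acquisition protocol and diagnosis (categorical covariates).
model = smf.ols(
    "metric ~ C(pipeline) + C(scanner) + C(protocol) + C(diagnosis)",
    data=df,
).fit()

print(model.summary())  # pipeline terms now reflect confounder-adjusted differences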

 

The following table forecasts the time needed to run N4U-AC3:

 

DATASET                          CMTK       FCP-CPAC   UMCP       NIAK
INDI Dataset                     ≅15 days   ≅2 days    ≅2 days    ≅2 days
1000 FCP; ADHD; ABIDE & COBRA    N/A        ≅12 days   ≅12 days   ≅12 days
NESDA Dataset                    N/A        ≅2 days    ≅2 days    ≅2 days
GENEVA/BASEL Dataset             ≅17 days   ≅2 days    ≅2 days    ≅2 days
SUB-TOTAL                        ≅32 days   ≅18 days   ≅18 days   ≅18 days
TOTAL                            ≅86 days

 

The expected result is an assessment of the best-performing tools for structural and functional connectome analyses. The goal is to define the best combination of structural and functional pipelines to classify the different psychiatric disorders.

Reference(s)

 

Under embargo.


N4U - Analysis Challenge 2

 

The aim of AC2 is to show whether the neuGRID environment can be extended to the Multiple Sclerosis (MS) community. For this reason, the neuGRID consortium has sought collaboration with MAGNIMS (Magnetic Resonance Imaging in MS), a collaboration of the larger academic MS imaging centers in Europe (www.magnims.eu).

 

An important task in the analysis of MS is quantifying the number and volume of MS lesions in the brain. These lesions appear as hyper- or hypo-intense regions within the white or grey matter. Although a number of software packages are available for lesion segmentation, the quality of the segmentation differs widely. Within AC2, various lesion segmentation algorithms will be compared. This is done using two datasets generated by the MAGNIMS consortium, which include manual segmentations of the lesions. One dataset consists of 53 subjects with a 2D dual-echo and a 3D T1 sequence. The other dataset consists of 74 subjects with a 2D FLAIR and a 3D T1 sequence.

 

The packages to be compared in AC2 are: Lesion-TOADS (Shiee et al., 2010); Lesion segmentation toolbox (Schmidt et al., 2012); CASCADE (Damangir et al., 2012); kNN-Tissue Type Prior (Steenwijk et al., 2013); HAMMER-White matter lesion (Yu et al., 2002) and the Lesion segmentation tool for 3D Slicer (Scully et al., 2010). The results will be compared to the manual segmentations (gold standard) on a number of outcome measures (e.g. number of lesions, overlap between the automated segmentations and the manual outline). Although lesions can occur in both white and grey matter, AC2 will concentrate on white matter only.
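To make the comparison concrete, the sketch below computes two of the simpler outcome measures, lesion count and spatial overlap (Dice) against the manual outline; the file names are hypothetical, and both images are assumed to be co-registered binary masks.

# Minimal sketch of AC2-style outcome measures: lesion count and Dice
# overlap between an automated lesion mask and the manual gold standard.
# File names are hypothetical; both masks are assumed to share one space.
import nibabel as nib
import numpy as np
from scipy import ndimage

auto = nib.load("lesions_automatic.nii.gz").get_fdata() > 0
manual = nib.load("lesions_manual.nii.gz").get_fdata() > 0

# Number of lesions = number of connected components in each mask.
n_auto = ndimage.label(auto)[1]
n_manual = ndimage.label(manual)[1]

# Dice coefficient: 2 |A ∩ M| / (|A| + |M|).
dice = 2.0 * np.logical_and(auto, manual).sum() / (auto.sum() + manual.sum())

print(f"lesions: {n_auto} automatic vs {n_manual} manual, Dice = {dice:.3f}")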

 

Reference(s)

 

Under embargo.


 

N4U - NS-NDD Analysis Challenge 1 demonstrates major improvement in hippocampal atrophy measures

 

The MAPS-HBSI algorithm, a little-used hippocampal atrophy measure, has been shown to be 70% more reproducible than the field's standard methods in N4U's head-to-head study of hippocampal atrophy measurement algorithms. Hippocampal atrophy provides valuable information in the study of Alzheimer's disease.

Both FreeSurfer/recon-all and manual segmentation performed similarly, as previously published. However, MAPS-HBSI, which calculates partial volumes of boundary voxels, performed far better than expected. One possible reason is that the current study avoided statistically confounding factors, such as the classification of subjects into healthy controls, mild cognitive impairment and Alzheimer's disease, or statistical tests that assume Gaussian distributions.
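The idea behind "partial volumes of boundary voxels" can be illustrated with a small sketch: instead of counting every voxel as fully inside or outside the hippocampus, voxels on the boundary contribute fractionally to the volume. The file name and voxel size below are hypothetical, and the code illustrates only the general principle, not the MAPS-HBSI implementation.

# Illustration only: fractional (partial-volume) counting versus binary
# counting of hippocampal voxels; not the MAPS-HBSI implementation.
import numpy as np

# Hypothetical per-voxel hippocampus membership in [0, 1]: interior voxels
# are 1.0, background is 0.0, boundary voxels carry fractional values.
membership = np.load("hippocampus_membership.npy")  # hypothetical file
voxel_volume_mm3 = 1.0                               # assumed 1 mm isotropic voxels

binary_volume = (membership >= 0.5).sum() * voxel_volume_mm3
partial_volume = membership.sum() * voxel_volume_mm3

# The fractional estimate changes smoothly as the boundary shifts slightly,
# which is one reason partial-volume counting can be more reproducible.
print(f"binary: {binary_volume:.0f} mm^3, partial-volume: {partial_volume:.1f} mm^3")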

Results of the study are available on the neuGRID4you website.

MAPS-HBSI will be available for researchers to run on large datasets on a case-by-case basis. Contact neuGRID at www.neuGRID4you.eu for more information.

 

Analysis Challenge 1 Design

The purpose of the first analysis challenge in N4U is twofold. The first goal is to provide a relevant comparison of the performance of algorithms that takes advantage of the software, datasets and computational power available in N4U. The first-year challenge will thus focus on the comparison of algorithms used to measure the atrophy of the hippocampus, a structure in the brain particularly affected by Alzheimer's disease.

 

The second goal, which is specific to the first Analysis Challenge, is a beta test of the new ExpressLane tool architecture being introduced into N4U. ExpressLane takes advantage of a common practice in the neuroscience community, that of implementing image processing algorithms as shell scripts under Linux. ExpressLane uses a slightly modified version of the shell script, along with its executables and other files, and can run the script on tens, hundreds, thousands or tens of thousands of image volumes in a trivially parallel manner. ExpressLane aims to become a simple yet extensible environment within which N4U users can build, share and compose basic building blocks into complex marker pipelines.
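A generic sketch of this "trivially parallel" pattern is shown below: a per-volume shell script is launched independently for each image volume, with results collected per volume. The script name, directory layout and worker count are assumptions for illustration; this is not the ExpressLane code itself.

# Generic sketch of trivially parallel execution of a per-volume shell
# script; script name and directory layout are hypothetical, and this is
# not the actual ExpressLane implementation.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

SCRIPT = "./compute_marker.sh"                 # the neuroscientist's shell script
volumes = sorted(Path("volumes").glob("*.nii.gz"))

def run_one(volume: Path) -> int:
    # Each volume is independent, so each invocation can run in isolation.
    out_dir = Path("results") / volume.stem
    out_dir.mkdir(parents=True, exist_ok=True)
    return subprocess.call([SCRIPT, str(volume), str(out_dir)])

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=8) as pool:
        exit_codes = list(pool.map(run_one, volumes))
    print(f"{exit_codes.count(0)}/{len(exit_codes)} volumes processed successfully")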

 

The Analysis Challenge will combine the datasets, algorithms and computational power available from N4U to assess both the accuracy and reproducibility of commonly used algorithms for measuring brain atrophy in Alzheimer's disease from MRI scans. The following table provides estimated computation times for this analysis challenge.

 

Algorithm                                  CPU core hours per subject   Total CPU core hours (assuming 800 subjects)
FSL / FIRST                                12 - 24                      10.000 - 20.000
FreeSurfer / Hippocampus and Whole head    150                          120.000
MAPS-HBSI / Hippocampus                    150                          120.000
AdaBoost                                   6                            5.000
FSL / SIENA                                5                            4.000
FSL / VIENA                                5                            4.000
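As a quick back-of-envelope check, the totals in the last column are simply the per-subject figures scaled by the assumed 800 subjects and rounded; the short sketch below reproduces them (a sanity check of the table, not part of the challenge itself).

# Back-of-envelope check: per-subject CPU core hours scaled to 800 subjects.
subjects = 800
per_subject_core_hours = {
    "FSL / FIRST": (12, 24),
    "FreeSurfer / Hippocampus and Whole head": (150, 150),
    "MAPS-HBSI / Hippocampus": (150, 150),
    "AdaBoost": (6, 6),
    "FSL / SIENA": (5, 5),
    "FSL / VIENA": (5, 5),
}
for algorithm, (low, high) in per_subject_core_hours.items():
    print(f"{algorithm}: {low * subjects:,} - {high * subjects:,} core hours")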

 

The results of AC1 for hippocampi will be released at the Human Brain Mapping Conference (OHBM2014) in Hamburg, Germany on Tuesday June 10, 2014 as poster 3582 and posted to this website shortly thereafter. 


 

Reference(s)

 

Under embargo.


LINGA - LInked Neuroscientific Grand chAllenge

The LInked Neuroscientific Grand chAllenge (LINGA) was the first large-scale neuroscientific experiment ever run across 3 international and complementary neuroscience infrastructures, i.e. CBRAIN in Canada, LONI in the USA and neuGRID in Europe, together with 3 European distributed computing infrastructure initiatives, i.e. outGRID, SHIWA and EGI.

 

Jointly developed by SHIWA and outGRID, the LINGA workflow processed patients' cortical thickness through a demanding image processing pipeline, using data sources hosted at and processed by the 3 participating neuroscience infrastructures. Once the analysis finished, the results were sent to EGI to be statistically compared against selected meaningful criteria and used to produce the graphs on which neuroscientists can then base their interpretations.

 

The data challenge was thus completed in less than 12 days instead of 5.5 years on a single computer. It involved 1.440 CPU cores to produce 7.2 GB of scientific data in total.

 

 

                                           CIVET @ N4U    CIVET @ N4U    CIVET @ CBRAIN   QC & Statistics @ EGI
                                           (ANM)          (US-ADNI)      (ICBM)
Experiment duration in outGRID             21h30min       11 days        7h12min          2h30min
Experiment duration on a single computer   3.5 months     > 5 years      1.5 months       18 hours
Number of patients                         371            715            156              1.327
Number of MR brain scans                   371            6.235          156              na
Total processing operations                17.066         286.810        7.176            6.232
Number of CPU cores involved               100            184            156              1.000
Number of nodes involved                   5              4              1                unknown
Volume of generated data                   1.3 GB         330 GB         5.5 GB           2 MB

 

Reference(s)

 

2011 - Best Live Demo Award @ EGI'11 Technical Forum, http://gridtalk-project.blogspot.com/2011/09/shiwa-linga-neuroscientific-grand.html

 

 

This work has been carried out in collaboration with, and under partial funding of, the SHIWA project (agreement number 261585). SHIWA is supported by a grant from the European Commission's FP7 INFRASTRUCTURES-2010-2 call.

 

This work has been carried out in collaboration with, and under partial funding of, the outGRID project (agreement number 246690). outGRID has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 call.

neuGRID - AC/DC3 Data Challenge

 

The AC/DC3 (analysis challenge/data challenge 3) was designed as an extension of AC/DC2, run on a larger dataset. It consisted of analyzing the latest US-ADNI dataset, which contained approximately 7.500 scans at the time, i.e. baseline together with 5 to 10 follow-ups per patient, in DICOM format. This represented a population of 800 individual patients, amounting to 112 GB of imaging and clinical data. Each scan was about 10 to 20 MB and could contain from 150 to 250 DICOM slices. The data challenge consisted of analyzing the whole dataset, without quality control or data filtering, using three different cortical thickness extraction pipelines, i.e. CIVET, FreeSurfer and BrainVISA. To do so, the neuGRID infrastructure was again used, but this time together with external computational resources contributed by the EGI project.

 

The data challenge was thus completed in less than 3 months, instead of the more than a hundred years it would have taken on a single computer, thanks to the continuous utilization of 1.300 CPU cores from 6 participating nodes. The challenge produced 2.2 TB of scientific data. The following table provides more accurate figures about the challenge.

 

Experiment duration using neuGRID < 3 months
Experiment duration on a single computer > 100 years
Number of patients 800
Number of MR brain scans 7.500
Total processing operations 700.000
Number of CPU cores involved 1.300
Number of nodes involved 6
Volume of generated data 2.2 TB
Reference(s)

 

2010 - Best Live Demo Award @ EGEE'10 User Forum, http://gridtalk-project.blogspot.com/2010/04/winners-of-best-demo-and-poster.html


neuGRID - AC/DC2 Data Challenge

 

The AC/DC2 (analysis challenge/data challenge 2) was performed on the US-ADNI data (715 patient folders containing 6.235 scans in total, i.e. baseline together with 5 to 10 follow-ups per patient, in MINC file format, representing roughly 108 GB of data). Each scan was about 10 to 20 MB and could contain from 150 to 250 slices. The data challenge consisted of analyzing the whole dataset, without quality control or data filtering, using the CIVET pipeline from the MNI/McGill. To do so, two of the three neuGRID nodes available at the time, equipped with 64-bit Worker Nodes and the 64-bit version of the CIVET pipeline, were used in the production environment. The two nodes provided 184 processing cores, together with 5.3 TB of storage capacity, and were connected to the GEANT2 network, thus guaranteeing good network bandwidth.

The CIVET pipeline was executed with no optimization in order to maximize the number of possible parallel job executions. Multiple CIVET instances were thus spawned on the participating neuGRID nodes. CIVET-64 took about 7 hours to process a single scan and generated, as output, 10 times the initial data volume. The data challenge therefore took approximately 11 days to complete, and 1 TB of scientific data was generated. The following table provides more accurate figures about the challenge.
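The 11-day figure is consistent with a simple back-of-envelope estimate from the numbers above, assuming perfectly parallel and uninterrupted execution:

# Back-of-envelope estimate of the AC/DC2 wall-clock time from the figures
# quoted above; assumes perfectly parallel, uninterrupted execution.
scans = 6235
hours_per_scan = 7            # CIVET-64, roughly 7 hours per scan
cores = 184

total_core_hours = scans * hours_per_scan        # about 43,600 core hours
wall_clock_days = total_core_hours / cores / 24  # about 10 days

print(f"~{wall_clock_days:.1f} days of continuous processing")

The small gap between this estimate and the observed 11 days is plausibly explained by scheduling overhead and job resubmissions.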

 

Experiment duration using neuGRID 11 days
Experiment duration on a single computer > 5 years
Number of patients 715
Number of MR brain scans 6.235
Total processing operations 286.810
Number of CPU cores involved 184
Number of nodes involved 4
Volume of generated data 1 TB
Reference(s)

 

2009 - Best Live Demo Award @ EGEE'09 Conference, http://gridtalk-project.blogspot.com/2009/09/best-poster-and-best-demo-competition.html