AFQ.data

Module Contents

Functions

read_callosum_templates(resample_to=False)

Load AFQ callosum templates from file

read_templates(resample_to=False)

Load AFQ templates from file

read_or_templates(resample_to=False)

Load AFQ OR templates from file

fetch_hcp(subjects, hcp_bucket='hcp-openaccess', profile_name='hcp', path=None, study='HCP_1200', aws_access_key_id=None, aws_secret_access_key=None)

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS [1] specification.

read_stanford_hardi_tractography()

Reads a minimal tractography from the Stanford dataset.

organize_stanford_data(path=None, clear_previous_afq=False)

If necessary, downloads the Stanford HARDI dataset into the DIPY directory and creates a BIDS-compliant file-system structure in the AFQ data directory.

Attributes

fetch_callosum_templates

fetch_templates

fetch_or_templates

fetch_stanford_hardi_tractography

AFQ.data.fetch_callosum_templates
AFQ.data.read_callosum_templates(resample_to=False)

Load AFQ callosum templates from file

Returns
dict
    Keys are the names of the template ROIs; values are nibabel Nifti1Image objects loaded from each ROI nifti file.
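
Example

A minimal usage sketch (not part of the original docs; assumes pyAFQ is installed and that the template files are downloaded on first use). The same pattern applies to read_templates and read_or_templates below.

    import AFQ.data as afd

    # Load the callosum templates: a dict mapping ROI names to Nifti1Image objects.
    callosum_templates = afd.read_callosum_templates()
    for name, img in callosum_templates.items():
        print(name, img.shape)
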
AFQ.data.fetch_templates
AFQ.data.read_templates(resample_to=False)

Load AFQ templates from file

Returns
dict
    Keys are the names of the template ROIs; values are nibabel Nifti1Image objects loaded from each ROI nifti file.
AFQ.data.fetch_or_templates
AFQ.data.read_or_templates(resample_to=False)

Load AFQ OR templates from file

Returns
dict
    Keys are the names of the template ROIs; values are nibabel Nifti1Image objects loaded from each ROI nifti file.
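
Example

Each reader accepts a resample_to argument. A hedged sketch, assuming resample_to takes a nibabel image defining the grid onto which the ROIs are resampled (DIPY's MNI template is used here for illustration; the exact accepted types are an assumption):

    import AFQ.data as afd
    from dipy.data import read_mni_template

    # Target grid: a nibabel image of the MNI template.
    mni = read_mni_template()
    or_templates = afd.read_or_templates(resample_to=mni)
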
AFQ.data.fetch_hcp(subjects, hcp_bucket='hcp-openaccess', profile_name='hcp', path=None, study='HCP_1200', aws_access_key_id=None, aws_secret_access_key=None)

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS [1] specification.

Parameters
subjects : list
    Each item is an integer, identifying one of the HCP subjects.
hcp_bucket : string, optional
    The name of the HCP S3 bucket. Default: "hcp-openaccess"
profile_name : string, optional
    The name of the AWS profile used for access. Default: "hcp"
path : string, optional
    Path to save files into. Default: '~/AFQ_data'
study : string, optional
    Which HCP study to grab. Default: 'HCP_1200'
aws_access_key_id : string, optional
    AWS credentials for HCP AWS S3. Will only be used if profile_name is set to False.
aws_secret_access_key : string, optional
    AWS credentials for HCP AWS S3. Will only be used if profile_name is set to False.

Returns
dict with remote and local names of these files, and the path to the BIDS derivative dataset.

Notes

To use this function with its default setting, you need to have a file '~/.aws/credentials' that includes a section:

    [hcp]
    AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
    AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX

The keys are credentials that you can get from HCP (see https://wiki.humanconnectome.org/display/PublicData/How+To+Connect+to+Connectome+Data+via+AWS).

Local filenames are changed to match our expected conventions.

References

[1] Gorgolewski et al. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3:160044. DOI: 10.1038/sdata.2016.44.
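
Example

A hedged sketch of a typical call. The subject IDs are illustrative, valid HCP credentials must be available under the "hcp" AWS profile, and unpacking the two return values as a tuple is an assumption based on the Returns description above.

    import AFQ.data as afd

    # Fetch diffusion data for two HCP subjects into ~/AFQ_data.
    files, bids_path = afd.fetch_hcp([100206, 100307])
    print(bids_path)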

AFQ.data.fetch_stanford_hardi_tractography
AFQ.data.read_stanford_hardi_tractography()

Reads a minimal tractography from the Stanford dataset.
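
Example

A minimal sketch; the structure of the returned object is not documented above, so it is inspected rather than assumed:

    import AFQ.data as afd

    tractography = afd.read_stanford_hardi_tractography()
    print(type(tractography))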

AFQ.data.organize_stanford_data(path=None, clear_previous_afq=False)

If necessary, downloads the Stanford HARDI dataset into the DIPY directory and creates a BIDS-compliant file-system structure in the AFQ data directory:

~/AFQ_data/
└── stanford_hardi
    ├── dataset_description.json
    └── derivatives
        ├── freesurfer
        │   ├── dataset_description.json
        │   └── sub-01
        │       └── ses-01
        │           └── anat
        │               ├── sub-01_ses-01_T1w.nii.gz
        │               └── sub-01_ses-01_seg.nii.gz
        └── vistasoft
            ├── dataset_description.json
            └── sub-01
                └── ses-01
                    └── dwi
                        ├── sub-01_ses-01_dwi.bval
                        ├── sub-01_ses-01_dwi.bvec
                        └── sub-01_ses-01_dwi.nii.gz

If clear_previous_afq is True and there is an afq folder in derivatives, it will be removed.
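
Example

A minimal sketch: the first call downloads the data, later calls reuse the cached copy.

    import AFQ.data as afd

    afd.organize_stanford_data()
    # To also remove a previously computed afq derivatives folder:
    # afd.organize_stanford_data(clear_previous_afq=True)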