AFQ

Package Contents

Classes

AFQ

BundleDict

Functions

read_callosum_templates(resample_to=False)

Load AFQ callosum templates from file

read_or_templates(resample_to=False)

Load AFQ OR templates from file

read_templates(resample_to=False)

Load AFQ templates from file

fetch_hcp(subjects, hcp_bucket='hcp-openaccess', profile_name='hcp', path=None, study='HCP_1200', aws_access_key_id=None, aws_secret_access_key=None)

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS specification.

read_stanford_hardi_tractography()

Reads a minimal tractography from the Stanford dataset.

organize_stanford_data(path=None, clear_previous_afq=False)

If necessary, downloads the Stanford HARDI dataset into the DIPY directory and creates a BIDS-compliant file-system structure in the AFQ data directory.

Attributes

fetch_callosum_templates

fetch_or_templates

fetch_templates

fetch_stanford_hardi_tractography

_ga_id

AFQ.fetch_callosum_templates
AFQ.read_callosum_templates(resample_to=False)

Load AFQ callosum templates from file.

Returns
dict with keys: names of the template ROIs; values: nibabel Nifti1Image objects loaded from each ROI nifti file.
AFQ.fetch_or_templates
AFQ.read_or_templates(resample_to=False)

Load AFQ OR templates from file.

Returns
dict with keys: names of the template ROIs; values: nibabel Nifti1Image objects loaded from each ROI nifti file.
AFQ.fetch_templates
AFQ.read_templates(resample_to=False)

Load AFQ templates from file.

Returns
dict with keys: names of the template ROIs; values: nibabel Nifti1Image objects loaded from each ROI nifti file.
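A minimal usage sketch of the loaders above (an illustration, not part of pyAFQ; it assumes pyAFQ is installed and importable as `AFQ`, and network access on the first call so the templates can be downloaded):

```python
def load_callosum_roi_shapes(resample_to=False):
    """Load the callosum template ROIs and report each ROI's volume shape.

    The import is deferred so this sketch can be defined without pyAFQ
    installed; calling it requires pyAFQ and (on first use) a download.
    """
    from AFQ import read_callosum_templates
    templates = read_callosum_templates(resample_to=resample_to)
    # templates maps ROI names to nibabel Nifti1Image objects
    return {name: img.shape for name, img in templates.items()}
```

`read_or_templates` and `read_templates` follow the same pattern and return dicts of the same form.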
AFQ.fetch_hcp(subjects, hcp_bucket='hcp-openaccess', profile_name='hcp', path=None, study='HCP_1200', aws_access_key_id=None, aws_secret_access_key=None)

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS [1] specification.

Parameters

subjects : list

Each item is an integer, identifying one of the HCP subjects.

hcp_bucket : string, optional

The name of the HCP S3 bucket. Default: "hcp-openaccess"

profile_name : string, optional

The name of the AWS profile used for access. Default: "hcp"

path : string, optional

Path to save files into. Default: '~/AFQ_data'

study : string, optional

Which HCP study to grab. Default: 'HCP_1200'

aws_access_key_id : string, optional

AWS credentials for HCP AWS S3. Will only be used if profile_name is set to False.

aws_secret_access_key : string, optional

AWS credentials for HCP AWS S3. Will only be used if profile_name is set to False.

Returns
dict with remote and local names of these files,
path to BIDS derivative dataset

Notes

To use this function with its default settings, you need to have a file '~/.aws/credentials' that includes a section:

[hcp]
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX

The keys are credentials that you can get from HCP (see https://wiki.humanconnectome.org/display/PublicData/How+To+Connect+to+Connectome+Data+via+AWS).

Local filenames are changed to match our expected conventions.

[1] Gorgolewski et al. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3:160044. DOI: 10.1038/sdata.2016.44.
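The Notes above describe the required '~/.aws/credentials' section. A quick standard-library check that a credentials file defines a complete `[hcp]` profile can be written with `configparser` (a sketch, not part of pyAFQ):

```python
import configparser
import io


def has_hcp_profile(credentials_text):
    """Return True if an AWS credentials file defines a complete [hcp] section."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(credentials_text))
    # Option names are case-insensitive in configparser, matching the
    # upper-case keys shown in the Notes above.
    return (parser.has_section("hcp")
            and parser.has_option("hcp", "AWS_ACCESS_KEY_ID")
            and parser.has_option("hcp", "AWS_SECRET_ACCESS_KEY"))


# The section format documented in the Notes:
example = """\
[hcp]
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX
"""
```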

AFQ.fetch_stanford_hardi_tractography
AFQ.read_stanford_hardi_tractography()

Reads a minimal tractography from the Stanford dataset.

AFQ.organize_stanford_data(path=None, clear_previous_afq=False)

If necessary, downloads the Stanford HARDI dataset into the DIPY directory and creates a BIDS-compliant file-system structure in the AFQ data directory:

~/AFQ_data/
└── stanford_hardi
    ├── dataset_description.json
    └── derivatives
        ├── freesurfer
        │   ├── dataset_description.json
        │   └── sub-01
        │       └── ses-01
        │           └── anat
        │               ├── sub-01_ses-01_T1w.nii.gz
        │               └── sub-01_ses-01_seg.nii.gz
        └── vistasoft
            ├── dataset_description.json
            └── sub-01
                └── ses-01
                    └── dwi
                        ├── sub-01_ses-01_dwi.bval
                        ├── sub-01_ses-01_dwi.bvec
                        └── sub-01_ses-01_dwi.nii.gz

If clear_previous_afq is True and there is an afq folder in derivatives, it will be removed.
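The directory layout above can be reproduced with the standard library alone. This sketch (not pyAFQ code) creates the vistasoft branch of the documented tree as empty placeholder files in a temporary directory:

```python
import tempfile
from pathlib import Path

# Files of the vistasoft derivative, taken from the layout above
VISTASOFT_FILES = [
    "dataset_description.json",
    "sub-01/ses-01/dwi/sub-01_ses-01_dwi.bval",
    "sub-01/ses-01/dwi/sub-01_ses-01_dwi.bvec",
    "sub-01/ses-01/dwi/sub-01_ses-01_dwi.nii.gz",
]


def make_vistasoft_tree(root):
    """Create empty placeholder files mirroring the documented BIDS layout."""
    base = Path(root) / "stanford_hardi" / "derivatives" / "vistasoft"
    for rel in VISTASOFT_FILES:
        target = base / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.touch()
    return base


with tempfile.TemporaryDirectory() as tmp:
    base = make_vistasoft_tree(tmp)
    print(sorted(p.name for p in base.rglob("*.nii.gz")))
```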

class AFQ.AFQ(bids_path, bids_filters={'suffix': 'dwi'}, preproc_pipeline='all', participant_labels=None, output_dir=None, custom_tractography_bids_filters=None, b0_threshold=50, robust_tensor_fitting=False, min_bval=None, max_bval=None, reg_template='mni_T1', reg_subject='power_map', brain_mask=B0Mask(), mapping=SynMap(), profile_weights='gauss', bundle_info=None, parallel_params={'engine': 'serial'}, scalars=['dti_fa', 'dti_md'], virtual_frame_buffer=False, viz_backend='plotly_no_gif', tracking_params=None, segmentation_params=None, clean_params=None, **kwargs)

Bases: object

_get_best_scalar(self)
get_reg_template(self)
__getattribute__(self, attr)

Return getattr(self, name).

combine_profiles(self)
get_streamlines_json(self)
export_all(self, viz=True, afqbrowser=True, xforms=True, indiv=True)

Exports all the possible outputs

Parameters

viz : bool

Whether to output visualizations. This includes tract profile plots, a figure containing all bundles, and, if using the AFQ segmentation algorithm, individual bundle figures. Default: True

afqbrowser : bool

Whether to output an AFQ-Browser from this AFQ instance. Default: True

xforms : bool

Whether to output the reg_template image in subject space and, depending on whether it is possible based on the mapping used, to output the b0 in template space. Default: True

indiv : bool

Whether to output individual bundles in their own files, in addition to the one file containing all bundles. If using the AFQ segmentation algorithm, individual ROIs are also output. Default: True

upload_to_s3(self, s3fs, remote_path)

Upload entire AFQ derivatives folder to S3

assemble_AFQ_browser(self, output_path=None, metadata=None, page_title='AFQ Browser', page_subtitle='', page_title_link='', page_subtitle_link='')

Assembles an instance of the AFQ-Browser from this AFQ instance. First, we generate the combined tract profile if it is not already generated. This includes running the full AFQ pipeline if it has not already run. The combined tract profile is one of the outputs of export_all. Second, we generate a streamlines.json file from the bundle recognized in the first subject’s first session. Third, we call AFQ-Browser’s assemble to assemble an AFQ-Browser instance in output_path.

Parameters

output_path : str

Path to the location to create this instance of the browser in. Called "target" in the AFQ-Browser API. If None, bids_path/derivatives/afq_browser is used. Default: None

metadata : str

Path to a subject metadata csv file. If None, a metadata file containing only subject ID is created. This file requires a "subjectID" column to work. Default: None

page_title : str

Page title. If None, a prompt is sent to the command line. Default: "AFQ Browser"

page_subtitle : str

Page subtitle. If None, a prompt is sent to the command line. Default: ""

page_title_link : str

Title hyperlink (including http(s)://). If None, a prompt is sent to the command line. Default: ""

page_subtitle_link : str

Subtitle hyperlink (including http(s)://). If None, a prompt is sent to the command line. Default: ""
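Putting the class and export_all together, a hypothetical end-to-end sketch looks like the following. It is not runnable without pyAFQ and a preprocessed BIDS dataset; `my_bids_path` and the `preproc_pipeline` value are assumptions for illustration:

```python
def run_afq_pipeline(bids_path):
    """Run the full AFQ pipeline on a BIDS dataset and export all outputs.

    Deferred import: this is a sketch and needs pyAFQ plus a preprocessed
    BIDS dataset at bids_path to actually run.
    """
    from AFQ import AFQ
    myafq = AFQ(
        bids_path,
        preproc_pipeline="vistasoft",  # assumption: your preprocessing derivative
        scalars=["dti_fa", "dti_md"],  # the documented default scalars
    )
    # Triggers the full pipeline if it has not already run, then exports
    # profiles, visualizations, transforms, and individual bundles.
    myafq.export_all(viz=True, afqbrowser=True, xforms=True, indiv=True)
    return myafq
```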

class AFQ.BundleDict(bundle_info=BUNDLES, seg_algo='afq', resample_to=None)

Bases: collections.abc.MutableMapping

gen_all(self)
__setitem__(self, key, item)
__getitem__(self, key)
__len__(self)
__delitem__(self, key)
__iter__(self)
copy(self)
resample_all_roi(self)
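BundleDict derives from collections.abc.MutableMapping, so implementing the five dunder methods listed above is what makes it behave like a dict. That contract can be illustrated with a toy stand-in (not pyAFQ's implementation, which additionally handles bundle generation and ROI resampling via gen_all and resample_all_roi):

```python
from collections.abc import MutableMapping


class ToyBundleDict(MutableMapping):
    """Toy stand-in showing the MutableMapping contract BundleDict fulfills."""

    def __init__(self, bundle_info=None):
        self._dict = dict(bundle_info or {})

    def __setitem__(self, key, item):
        self._dict[key] = item

    def __getitem__(self, key):
        return self._dict[key]

    def __len__(self):
        return len(self._dict)

    def __delitem__(self, key):
        del self._dict[key]

    def __iter__(self):
        return iter(self._dict)


# Bundle names and contents are hypothetical examples
bd = ToyBundleDict({"CST_L": {"prob_map": None}})
bd["CST_R"] = {"prob_map": None}
# The five methods above provide keys(), items(), update(), etc. for free
print(sorted(bd))
```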
AFQ._ga_id = UA-156363454-3