Analyze a collection in memory
Here, we’ll analyze the growing collection by loading it into memory. This is only possible if the collection isn’t too large. If your data is large, you’ll likely want to iterate over the collection to train a model, which is the topic of the next page.
import lamindb as ln
import bionty as bt
import anndata as ad
💡 connected lamindb: testuser1/test-scrna
ln.settings.transform.stem_uid = "mfWKm8OtAzp8"
ln.settings.transform.version = "1"
ln.track()
💡 notebook imports: anndata==0.9.2 bionty==0.42.4 lamindb==0.69.2 scanpy==1.10.0
💡 saved: Transform(uid='mfWKm8OtAzp85zKv', name='Analyze a collection in memory', key='scrna4', version='1', type=notebook, updated_at=2024-03-28 12:11:01 UTC, created_by_id=1)
💡 saved: Run(uid='Alb1UOyeBMYTXrEPWsHZ', transform_id=4, created_by_id=1)
ln.Collection.df()
| id | uid | name | description | version | hash | reference | reference_type | transform_id | run_id | artifact_id | visibility | created_at | updated_at | created_by_id |
|----|-----|------|-------------|---------|------|-----------|----------------|--------------|--------|-------------|------------|------------|------------|---------------|
| 2 | k8AG1SK3O5uwnd13RYSP | My versioned scRNA-seq collection | None | 2 | HNR3VFV60_yqRnUka11E | None | None | 2 | 2 | NaN | 1 | 2024-03-28 12:10:49.167012+00:00 | 2024-03-28 12:10:49.167054+00:00 | 1 |
| 1 | k8AG1SK3O5uwnd13r7oC | My versioned scRNA-seq collection | None | 1 | 9sXda5E7BYiVoDOQkTC0KB | None | None | 1 | 1 | 1.0 | 1 | 2024-03-28 12:10:20.810283+00:00 | 2024-03-28 12:10:20.810302+00:00 | 1 |
collection = ln.Collection.filter(
name="My versioned scRNA-seq collection", version="2"
).one()
collection.artifacts.df()
| id | uid | storage_id | key | suffix | accessor | description | version | size | hash | hash_type | n_objects | n_observations | transform_id | run_id | visibility | key_is_virtual | created_at | updated_at | created_by_id |
|----|-----|------------|-----|--------|----------|-------------|---------|------|------|-----------|-----------|----------------|--------------|--------|------------|----------------|------------|------------|---------------|
| 2 | hcUbEXbalc3FN8L7TAhB | 1 | None | .h5ad | AnnData | 10x reference adata | None | 857752 | 0Fozmib89XWbFoD6hSq5yA | md5 | None | 70 | 2 | 2 | 1 | True | 2024-03-28 12:10:45.983910+00:00 | 2024-03-28 12:10:46.603000+00:00 | 1 |
| 1 | k8AG1SK3O5uwnd13r7oC | 1 | None | .h5ad | AnnData | Human immune cells from Conde22 | None | 57612943 | 9sXda5E7BYiVoDOQkTC0KB | sha1-fl | None | 1648 | 1 | 1 | 1 | True | 2024-03-28 12:10:16.205147+00:00 | 2024-03-28 12:10:20.806452+00:00 | 1 |
If the collection isn’t too large, we can now load it into memory. Under the hood, the AnnData objects are concatenated during loading. How long this takes depends on the number and size of the underlying artifacts. If you load the collection frequently, consider storing a concatenated version of it rather than the individual pieces.
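As a rough feasibility check before loading, you can sum the on-disk sizes of the artifacts backing the collection. This is a minimal sketch using the size column shown above; the in-memory AnnData will typically be larger than the compressed .h5ad files, and total_bytes is just an illustrative name.
# sum the on-disk sizes (in bytes) of the artifacts in the collection
total_bytes = collection.artifacts.df()["size"].sum()
print(f"total size on disk: {total_bytes / 1e6:.1f} MB")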
adata = collection.load()
The default during concatenation is an outer join, as in pandas:
adata
AnnData object with n_obs × n_vars = 1718 × 36508
obs: 'cell_type', 'n_genes', 'percent_mito', 'louvain', 'donor', 'tissue', 'assay', 'artifact_uid'
obsm: 'X_pca', 'X_umap'
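For intuition, the result is roughly equivalent to loading each artifact individually and concatenating with anndata. This is a sketch, not necessarily what collection.load() executes internally; adatas and adata_manual are illustrative names.
# load each artifact and concatenate manually with an outer join on the genes;
# label= adds a column keyed by artifact uid, mirroring artifact_uid below
adatas = {artifact.uid: artifact.load() for artifact in collection.artifacts.all()}
adata_manual = ad.concat(adatas, join="outer", label="artifact_uid")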
The AnnData object references the individual artifacts through the artifact_uid column in its .obs annotations:
adata.obs.artifact_uid.cat.categories
Index(['hcUbEXbalc3FN8L7TAhB', 'k8AG1SK3O5uwnd13r7oC'], dtype='object')
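This makes it easy to slice the joint object back down to the cells of a single artifact. A minimal sketch using one of the uids above; adata_conde22 is an illustrative name.
# subset the concatenated AnnData to the cells of the Conde22 artifact
adata_conde22 = adata[adata.obs.artifact_uid == "k8AG1SK3O5uwnd13r7oC"]
adata_conde22.n_obs  # 1648 cells, matching the artifact's n_observations above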
We can easily obtain Ensembl IDs for gene symbols using the lookup object:
genes = bt.Gene.lookup(field="symbol")
genes.itm2b.ensembl_gene_id
'ENSG00000136156'
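If you need to map many symbols at once, you can also work with the gene registry as a DataFrame rather than through attribute access. A minimal sketch; it assumes the symbols of interest were registered when curating the collection, and symbol_to_ensembl is an illustrative name.
# build a symbol -> Ensembl ID mapping from the registered genes
symbol_to_ensembl = bt.Gene.df().set_index("symbol")["ensembl_gene_id"]
symbol_to_ensembl.get("ITM2B")  # 'ENSG00000136156'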
Let us create a plot:
import scanpy as sc
sc.pp.pca(adata, n_comps=2)
sc.pl.pca(
adata,
color=genes.itm2b.ensembl_gene_id,
title=(
f"{genes.itm2b.symbol} / {genes.itm2b.ensembl_gene_id} /"
f" {genes.itm2b.description}"
),
save="_itm2b",
)
WARNING: saving figure to file figures/pca_itm2b.pdf
We could save the plot as a PDF and then see it in the data lineage graph:
artifact = ln.Artifact("./figures/pca_itm2b.pdf", description="My result on ITM2B")
artifact.save()
artifact.view_lineage()
But given that the figure is part of the notebook, we can also rely on the report that is created when saving the notebook via the command line:
lamin save <notebook_path>
To see the current notebook, visit: lamin.ai/laminlabs/lamindata/record/core/Transform?uid=mfWKm8OtAzp8z8