
Other functions and classes that make up the core of Matilda

These functions and classes can be reviewed for a rough understanding of how Matilda works. In general, they are not called directly or separately in the scripts; they are the building blocks of Matilda's core functions, which are described in the Hyperparameter Tuning section.

In the script ‘model_rna’

class model_rna.CiteAutoencoder(nfeatures_rna=0, hidden_rna=185, z_dim=20, classify_dim=17)

Bases: Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
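
For orientation, here is a minimal usage sketch. The dimensions are illustrative, and the exact return values of forward depend on Matilda's implementation (a VAE-style autoencoder with a classification head typically returns reconstructions, the latent parameters, and class logits):

```python
import torch
from model_rna import CiteAutoencoder

# Illustrative sizes: 10703 RNA features, 17 cell types.
model = CiteAutoencoder(nfeatures_rna=10703, hidden_rna=185,
                        z_dim=20, classify_dim=17)

x = torch.randn(64, 10703)  # a toy batch of 64 cells
outputs = model(x)          # call the instance, not model.forward(x)
```

Calling the instance rather than forward directly matters for the hook behaviour described in the note above.
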
class model_rna.Decoder(nfeatures_modality1=10703, hidden_modality1=185, z_dim=128)

Bases: Module

Decoder for two-modality data (CITE-seq and SHARE-seq data)

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class model_rna.Encoder(nfeatures_modality1=10703, hidden_modality1=185, z_dim=128)

Bases: Module

Encoder for CITE-seq data

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reparameterize(mu, logvar)
training: bool
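
reparameterize(mu, logvar) is the standard VAE reparameterization trick. A generic sketch of what such a method computes (Matilda's exact code may differ cosmetically):

```python
import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I); sampling stays
    # differentiable with respect to mu and logvar.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std
```
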
class model_rna.LinBnDrop(n_in, n_out, bn=True, p=0.0, act=None, lin_first=True)

Bases: Sequential

Module grouping BatchNorm1d, Dropout and Linear layers

training: bool
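
LinBnDrop follows the fastai convention of bundling a linear layer with optional batch norm, dropout, and activation. A plausible sketch under that assumption (details such as bias handling may differ in Matilda's code):

```python
import torch.nn as nn

class LinBnDrop(nn.Sequential):
    """Linear layer with optional BatchNorm1d, Dropout and activation."""
    def __init__(self, n_in, n_out, bn=True, p=0.0, act=None, lin_first=True):
        # BatchNorm normalises the linear layer's output when lin_first=True,
        # otherwise its input.
        bn_drop = [nn.BatchNorm1d(n_out if lin_first else n_in)] if bn else []
        if p != 0.0:
            bn_drop.append(nn.Dropout(p))
        lin = [nn.Linear(n_in, n_out, bias=not bn)]
        if act is not None:
            lin.append(act)
        layers = lin + bn_drop if lin_first else bn_drop + lin
        super().__init__(*layers)
```
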

In the script ‘model’

class model.CiteAutoencoder_CITEseq(nfeatures_rna=0, nfeatures_adt=0, hidden_rna=185, hidden_adt=15, z_dim=20, classify_dim=17)

Bases: Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class model.CiteAutoencoder_SHAREseq(nfeatures_rna=0, nfeatures_atac=0, hidden_rna=185, hidden_atac=15, z_dim=20, classify_dim=17)

Bases: Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class model.CiteAutoencoder_TEAseq(nfeatures_rna=10000, nfeatures_adt=30, nfeatures_atac=10000, hidden_rna=185, hidden_adt=30, hidden_atac=185, z_dim=100, classify_dim=17)

Bases: Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
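
The CITE-seq, SHARE-seq and TEA-seq autoencoders share one pattern: the per-modality feature counts in the constructor partition a single input vector. Assuming the modalities are concatenated along the feature axis before the forward pass (an assumption, not confirmed by the signatures alone), a TEA-seq call might look like:

```python
import torch
from model import CiteAutoencoder_TEAseq

model = CiteAutoencoder_TEAseq(nfeatures_rna=10000, nfeatures_adt=30,
                               nfeatures_atac=10000, z_dim=100,
                               classify_dim=17)

# Hypothetical batch of 8 cells: RNA, ADT and ATAC features concatenated
# along the feature axis, in the order the constructor names them.
x = torch.cat([torch.randn(8, 10000),   # RNA
               torch.randn(8, 30),      # ADT
               torch.randn(8, 10000)],  # ATAC
              dim=1)
outputs = model(x)
```
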
class model.Decoder(nfeatures_modality1=10703, nfeatures_modality2=192, hidden_modality1=185, hidden_modality2=15, z_dim=128)

Bases: Module

Decoder for two-modality data (CITE-seq and SHARE-seq data)

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class model.Decoder_TEAseq(nfeatures_rna=10703, nfeatures_adt=192, nfeatures_atac=10000, hidden_rna=185, hidden_adt=30, hidden_atac=185, z_dim=100)

Bases: Module

Decoder for TEA-seq data

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class model.Encoder(nfeatures_modality1=10703, nfeatures_modality2=192, hidden_modality1=185, hidden_modality2=15, z_dim=128)

Bases: Module

Encoder for CITE-seq data

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reparameterize(mu, logvar)
training: bool
class model.Encoder_TEAseq(nfeatures_rna=10703, nfeatures_adt=192, nfeatures_atac=192, hidden_rna=185, hidden_adt=30, hidden_atac=185, z_dim=128)

Bases: Module

Encoder for TEA-seq data

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reparameterize(mu, logvar)
training: bool
class model.LinBnDrop(n_in, n_out, bn=True, p=0.0, act=None, lin_first=True)

Bases: Sequential

Module grouping BatchNorm1d, Dropout and Linear layers

training: bool

In the script ‘train’

train.train_model(model, train_dl, test_dl, lr, epochs, classify_dim=17, best_top1_acc=0, save_path='', feature_num=10000)
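
A hypothetical end-to-end invocation, assuming util.MyDataset wraps a (data, label) tensor pair; the learning rate, epoch count, and sizes are arbitrary toy values:

```python
import torch
from torch.utils.data import DataLoader

import train
import util
from model import CiteAutoencoder_CITEseq

# Toy data purely for illustration: 200 cells, 100 RNA + 20 ADT features,
# 17 cell types.
data = torch.randn(200, 120)
labels = torch.randint(0, 17, (200,))
train_dl = DataLoader(util.MyDataset(data, labels), batch_size=64, shuffle=True)
test_dl = DataLoader(util.MyDataset(data, labels), batch_size=64)

model = CiteAutoencoder_CITEseq(nfeatures_rna=100, nfeatures_adt=20,
                                z_dim=20, classify_dim=17)

# Presumably tracks top-1 accuracy across epochs and checkpoints the best
# model under save_path; the exact return value is not documented here.
train.train_model(model, train_dl, test_dl, lr=0.02, epochs=5,
                  classify_dim=17, save_path='results/', feature_num=100)
```
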

In the script ‘predict’

predict.test_model(model, dl, real_label, classify_dim=17, save_path='')
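
Continuing the sketch above, evaluation pairs a trained model with ground-truth labels; the label file path and format here are hypothetical:

```python
import predict
import util

# real_label presumably maps the label file to integer class indices.
real = util.real_label('data/test_labels.csv', classify_dim=17)
predict.test_model(model, test_dl, real, classify_dim=17, save_path='results/')
```
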

In the script ‘util’

class util.AverageMeter(name, fmt=':f')

Bases: object

Computes and stores the average and current value

reset()
update(val, n=1)
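
This matches the classic AverageMeter pattern from the PyTorch ImageNet example; a sketch of what reset() and update(val, n=1) typically do:

```python
class AverageMeter:
    """Tracks the latest value and a running (count-weighted) average."""
    def __init__(self, name, fmt=':f'):
        self.name, self.fmt = name, fmt
        self.reset()

    def reset(self):
        self.val = self.avg = self.sum = self.count = 0

    def update(self, val, n=1):
        self.val = val            # most recent value
        self.sum += val * n       # n is the batch size the value covers
        self.count += n
        self.avg = self.sum / self.count
```

For example, top1 = AverageMeter('Acc@1', ':6.2f') followed by top1.update(93.75, n=64) records a batch accuracy weighted by batch size.
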
class util.CrossEntropyLabelSmooth(num_classes=17, epsilon=0.1)

Bases: Module

forward(inputs, targets)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
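
Label-smoothed cross entropy mixes the one-hot target with a uniform distribution over the num_classes classes, weighted by epsilon. A common implementation that fits this signature (a sketch, not necessarily Matilda's exact code):

```python
import torch
import torch.nn as nn

class CrossEntropyLabelSmooth(nn.Module):
    """Cross entropy against smoothed targets: (1 - eps) * one_hot + eps / K."""
    def __init__(self, num_classes=17, epsilon=0.1):
        super().__init__()
        self.num_classes = num_classes
        self.epsilon = epsilon
        self.logsoftmax = nn.LogSoftmax(dim=1)

    def forward(self, inputs, targets):
        log_probs = self.logsoftmax(inputs)                 # (batch, K)
        one_hot = torch.zeros_like(log_probs).scatter_(
            1, targets.unsqueeze(1), 1)                     # one-hot targets
        smoothed = ((1 - self.epsilon) * one_hot
                    + self.epsilon / self.num_classes)
        return (-smoothed * log_probs).mean(0).sum()
```
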
util.KL_loss(mu, logvar)
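
KL_loss(mu, logvar) is presumably the closed-form KL divergence between the encoder's Gaussian N(mu, sigma^2) and the standard normal prior, the regulariser paired with reparameterize() above. The standard form (whether Matilda averages or sums over dimensions is an implementation detail):

```python
import torch

def KL_loss(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) = -0.5 * (1 + log sigma^2 - mu^2 - sigma^2)
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
```
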
class util.MyDataset(data, label)

Bases: Dataset
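
A plausible sketch of MyDataset, assuming it simply pairs a data matrix with per-cell labels:

```python
from torch.utils.data import Dataset

class MyDataset(Dataset):
    """Pairs a (cells x features) matrix with per-cell labels."""
    def __init__(self, data, label):
        self.data = data
        self.label = label

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.label[idx]
```
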

class util.ToTensor

Bases: object

util.accuracy(output, target, topk=(1,))

Computes the accuracy over the k top predictions for the specified values of k
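
This is the standard top-k accuracy helper; a sketch consistent with the docstring:

```python
import torch

def accuracy(output, target, topk=(1,)):
    # output: (batch, num_classes) logits; target: (batch,) class indices.
    maxk = max(topk)
    batch_size = target.size(0)

    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    pred = pred.t()                                     # (maxk, batch)
    correct = pred.eq(target.view(1, -1).expand_as(pred))

    res = []
    for k in topk:
        correct_k = correct[:k].reshape(-1).float().sum(0)
        res.append(correct_k.mul_(100.0 / batch_size))  # percentage
    return res
```
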

util.compute_log2(data)
util.compute_zscore(data)
util.get_decodings(model, dl)
util.get_encodings(model, dl)
util.get_simulated_data(model, dl)
util.get_simulated_data_random_generation(model, dl)
util.get_vae_simulated_data_from_sampling(model, dl)
util.read_fs_label(label_path)
util.read_h5_data(data_path)
util.real_label(label_path, classify_dim)
util.save_checkpoint(state, save)
util.setup_seed(seed)
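
setup_seed(seed) is most likely the usual reproducibility helper; a typical version seeds every RNG that training touches:

```python
import random

import numpy as np
import torch

def setup_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trades some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
```
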