Introduction
Bioinformatics has this intimidating reputation — as if the moment you step into the field, someone will hand you a 128-GB RAM workstation, three GPUs, and a server room keycard. Many students imagine rows of blinking machines, noisy cooling fans, and a price tag that makes your wallet cry.
And here’s the secret that seasoned bioinformaticians know but beginners rarely hear:
You can learn 90% of practical bioinformatics on a regular laptop — even a basic one. You don’t need HPC. You don’t need expensive GPUs. You don’t need a fancy institution behind you.
Bioinformatics becomes magical when you realize you can enter the field without barriers.
Let’s start building that magic together.
The Mindset — You Don’t Need a Supercomputer to Start
Bioinformatics starts in your brain, not in a cluster.
What you really need:
• curiosity
• consistency
• the courage to break things
• the joy of debugging (painful but character-building)
Beginners often assume only “big machines” can run bioinformatics.
Intermediates think HPC is mandatory.
Experts smile, because they know… most of bioinformatics is smart shortcuts, not brute force.
This whole blog is about those shortcuts.
Tools You Can Use Completely Free (Beginner-to-Expert Friendly)
This is where the fun actually begins. You don’t need a supercomputer humming in your bedroom. You can build a powerful, real, production-level bioinformatics workflow using platforms that cost exactly nothing.
Let’s walk through each one — what it does, why it matters, and how you can use it to practice like a pro.
1️⃣ Google Colab — Free Cloud Workstation
Google Colab is basically a free entry ticket to a mini cloud HPC.
It lets you run Python, R, and machine learning pipelines right from your browser.
The best part: no installation, no setup, no dependency nightmares.
You can use it for:
• RNA-seq tutorials
Work with small count matrices, perform differential expression, create PCA plots — all in Python or R.
• Machine learning classification tasks
Train models on gene expression, variant data, peptide sequences, microbiome features.
• Biological data visualization
Plots, heatmaps, volcano plots, UMAP embeddings — everything renders beautifully.
• FASTA / VCF parsing
Using Biopython or PyVCF, you can explore sequences, variants, metadata, allele frequencies.
• scRNA-seq toy datasets
You can practice clustering, normalization, and marker gene identification on mini datasets.
Google basically lends you a CPU, RAM, and sometimes even a GPU for free.
It’s the perfect platform for students who want to experiment fast.
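To make “FASTA parsing” concrete, here is a minimal pure-Python sketch of what Biopython’s SeqIO.parse does for the simple case (the records below are invented, and real FASTA files need more error handling):

```python
# Minimal FASTA parser -- a toy sketch of what Biopython's SeqIO.parse
# handles for you (no error handling, illustration only).
def parse_fasta(text):
    """Yield (header, sequence) tuples from FASTA-formatted text."""
    header, seq = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []   # new record starts
        else:
            seq.append(line)             # sequence may span many lines
    if header is not None:
        yield header, "".join(seq)

records = list(parse_fasta(">gene1\nATGC\nGGTA\n>gene2\nTTAA\n"))
# records == [("gene1", "ATGCGGTA"), ("gene2", "TTAA")]
```

Once this idea clicks, switching to Biopython is just swapping in `SeqIO.parse(handle, "fasta")`.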
2️⃣ Galaxy Platform — Bioinformatics Without Coding
If someone put “bioinformatics” into a washing machine and removed all the fear of command lines, the result is Galaxy.
Galaxy gives you:
• ready-made workflows
• drag-and-drop pipelines
• visual QC reports
• cloud-backed computation
• free HPC resources
• reproducible workflows
You can run complete experiments like:
• Quality control (FastQC, MultiQC)
Check read quality, adapter content, GC bias.
• Trimming (Trimmomatic, Cutadapt)
Remove low-quality bases and adapters.
• Alignment (HISAT2, BWA, Bowtie2)
Map reads to genomes without installing anything.
• Variant calling (GATK, FreeBayes)
Real VCF pipelines, fully graphical.
• RNA-seq quantification (featureCounts, Salmon, DESeq2)
From raw FASTQs to a full differential expression analysis.
• Metagenomics workflows (Kraken2, MetaPhlAn)
Identify microbial species with visual outputs.
Galaxy is literally a free, friendly HPC cluster in your browser.
Professionals use it. Universities use it. You can use it at home.
3️⃣ NCBI & EBI Tools
These are not “student tools.”
These are the same tools used by researchers, companies, and published studies. And they’re open to you.
• BLAST
Compare sequences, identify genes, annotate proteins, predict homologs.
• Primer-BLAST
Design PCR primers with extreme accuracy.
• SRA Toolkit
Download raw FASTQ files from the Sequence Read Archive.
• Clustal Omega
Perform high-quality multiple sequence alignment.
• EMBOSS tools
Hundreds of programs for sequence analysis, motifs, restriction sites, and more.
Whatever stage you’re at, these tools will always be in your belt.
Even experts rely on them daily.
4️⃣ RStudio Cloud & Posit Cloud
R is the backbone of genomics, statistics, and visualization.
But installing it locally can be a nightmare — packages failing, versions mismatching, dependencies crying.
RStudio Cloud (Posit Cloud) solves that.
You can:
• write R code from the browser
• use tidyverse smoothly
• run DESeq2 without installation issues
• practice edgeR and limma pipelines
• generate plots (PCA, heatmaps, volcano plots)
• access real datasets for analysis
It feels like a private coding lab — one click away.
5️⃣ Free Local Tools — Your Personal Mini-Bioinformatics Lab
Even the simplest laptops can run the core command-line tools used in actual genomics pipelines.
These are lightweight, efficient, and foundational.
• FastQC
Check the health of sequencing reads.
• MultiQC
Merge QC reports from all samples into one clean view.
• IGV (Integrative Genomics Viewer)
A powerful genome browser to visualize alignments, variants, peaks, coverage, etc.
• samtools
Manipulate BAM/SAM files, view alignments, filter reads.
• bcftools
Work with VCF files — sorting, filtering, indexing, querying.
• bedtools
Perform genomic interval operations like overlaps, intersections, coverage.
• Biopython
Parse sequences, files, motifs, alignments, FASTA/FASTQ — a gentle entry into bioinformatics scripting.
• Bioconductor (and CRAN) packages
DESeq2, edgeR, limma, scater, Seurat… the absolute essentials of genomics.
These tools don’t need an HPC.
They were built to run efficiently on normal machines.
Bioinformatics isn’t locked behind expensive hardware.
These tools prove that skill, curiosity, and a willingness to explore matter far more than compute power.
Where to Get Free Datasets
The world of bioinformatics is filled with open-access data. You don’t need a lab, a grant, or an HPC to explore real biological questions. With the right datasets, your laptop becomes a research laboratory.
Here’s your guided tour through the richest data sources in the world — what they offer, why they matter, and how you can practice with them.
1. NCBI SRA (Sequence Read Archive)
Best for: raw sequencing data (FASTQ), practicing pipelines from scratch
SRA is the library of raw biological reads. If science had a “Netflix of FASTQs,” this would be it.
You can find:
• RNA-seq
• Whole genome sequencing (WGS)
• Microbiome/metagenomic reads
• ChIP-seq (protein–DNA interactions)
• ATAC-seq (chromatin accessibility)
• Small RNA sequencing
• Cancer genomics data
• Viral genomes
• Plant & microbial sequencing projects
SRA is perfect if you want to learn the full workflow:
raw reads → QC → alignment → quantification → analysis.
Practice ideas:
• download a tiny RNA-seq dataset and build the entire pipeline
• test different aligners and compare accuracy
• perform QC and spot problematic samples
• analyze a small bacterial WGS dataset and call variants
Nothing builds confidence like handling real messy FASTQs.
2. GEO (Gene Expression Omnibus)
Best for: processed matrices, DGEA, expression analysis, scRNA-seq
GEO is where experiments come already half-cooked, so beginners don’t drown in preprocessing.
You’ll find:
• differential gene expression datasets
• microarray data (gene-level expression tables)
• RNA-seq count matrices
• scRNA-seq processed matrices
• clinical metadata
• perturbation studies
• disease vs. control expression profiles
This is the perfect place to practice actual analysis — clustering, PCA, normalization, biomarkers.
Practice ideas:
• pick a disease dataset and find differentially expressed genes
• build a heatmap of top markers
• classify samples using ML based on gene expression
• analyze drug treatment effects
GEO is endlessly rich — every dataset is a new story.
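To make “find differentially expressed genes” concrete, here is the core calculation — the log2 fold change between group means — as a pure-Python toy (the expression values are invented; a real analysis uses DESeq2 or limma, which add proper statistics on top of this):

```python
import math

# Toy differential-expression sketch: log2 fold change for one gene.
# Values are made up for illustration.
disease = [40.0, 44.0, 36.0]   # normalized expression, disease samples
control = [20.0, 22.0, 18.0]   # normalized expression, controls

def log2_fc(a, b, pseudo=1.0):
    """log2 fold change of group means, with a pseudocount for zeros."""
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    return math.log2((mean_a + pseudo) / (mean_b + pseudo))

fc = log2_fc(disease, control)   # means 40 vs 20 -> roughly +1 (doubled)
```

A doubled mean gives a log2FC near 1; real pipelines pair this with a test statistic and multiple-testing correction before calling anything “differential.”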
3. ENCODE (Encyclopedia of DNA Elements)
Best for: regulatory genomics, epigenetics, transcription factor binding
If you want to feel like an advanced researcher, ENCODE is your universe.
This is not beginner-level fluff — it’s real, detailed, multi-omics data.
ENCODE includes:
• TF binding ChIP-seq
• Histone modification maps
• DNA accessibility (ATAC-seq, DNase-seq)
• Methylation data
• CRISPR screens
• Enhancer & promoter annotations
• High-resolution regulatory landscapes
Everything here teaches you how the genome actually functions.
Practice ideas:
• discover active regulatory regions in a cell type
• compare H3K27ac peaks across tissues
• analyze TF binding near oncogene promoters
• link enhancer peaks to gene expression
One dataset here can teach you more than a whole semester.
4. Kaggle Bioinformatics Datasets
Best for: machine learning practice, clean datasets, competitions
Kaggle is where bioinformatics meets ML in the friendliest way.
You’ll find:
• protein classification
• DNA sequence classification
• gene expression prediction
• metagenomics species identification
• mutation prediction tasks
• drug response datasets
Kaggle datasets come:
• clean
• labeled
• small enough for beginners
• perfect for ML models
• instantly downloadable
Practice ideas:
• train a CNN on DNA sequences
• use random forests to classify proteins
• build a cancer subtype predictor
• perform feature selection on expression matrices
Kaggle makes you think like an ML scientist without preprocessing pain.
5. MG-RAST & EBI Metagenomics
Best for: microbiome, environmental sequencing, microbial ecology
If you love ecosystems, MG-RAST and EBI Metagenomics are treasure troves.
They contain:
• soil microbiome
• ocean microbiome
• gut microbiome
• marine viruses
• plant-associated microbes
• extreme environment communities (hot springs, deep sea, etc.)
• functional pathway profiles
• taxonomic compositions
These datasets are noisy, diverse, and incredibly fun.
Practice ideas:
• identify microbial species from raw reads
• predict functional profiles using HUMAnN
• compare microbiomes across environments
• classify samples (e.g., soil vs. ocean vs. human gut)
• build co-occurrence networks
Metagenomics teaches you how messy real biology can be — in the best way.
6. Single-Cell Repositories
Best for: scRNA-seq clustering, pseudotime, developmental biology
Single-cell data lets you study biology one cell at a time — the closest thing to watching life think.
Major portals:
• Human Cell Atlas
• Broad Institute Single Cell Portal
• Chan Zuckerberg cellxgene
• NCBI GEO single-cell submissions
These provide ready-to-use:
• count matrices
• metadata
• annotations
• cell-type labels
Practice ideas:
• cluster cells and identify cell types
• find marker genes for each cluster
• build a pseudotime trajectory
• analyze immune cell diversity
• compare tumor vs. normal microenvironments
Single-cell genomics lets students feel like experts instantly.
Practice Workflows (Step-by-step)
1) RNA-seq End-to-End Pipeline (Beginner friendly — Galaxy or Colab)
Goal: FASTQ → gene counts → differential expression → plots
Tools: FastQC, Trimmomatic/Cutadapt, HISAT2 (or STAR), featureCounts (or Salmon/kallisto), DESeq2 (R)
Small dataset: pick a small RNA-seq study from SRA (4–6 samples, short paired-end reads)
Steps
1. Download data
• Galaxy: import by SRR accession (direct tool).
• Colab / local: use prefetch + fasterq-dump (SRA Toolkit).
2. QC — FastQC + MultiQC
• Run fastqc sample_R1.fastq sample_R2.fastq → look at per-base quality, adapters, overrepresented sequences.
• Combine with multiqc . for many samples.
What to check: per-base quality (low at ends), adapter peaks, GC anomalies.
3. Trim adapters & low quality
• trimmomatic PE sample_R1.fastq sample_R2.fastq out_R1_paired.fastq out_R1_unpaired.fastq ... or use cutadapt in Colab.
• Re-run FastQC to verify improvement.
4. Align reads (HISAT2 or STAR)
• Build or download a reference index (human/mouse) — HISAT2 indices are standard.
Tip: For small projects, HISAT2 is lighter than STAR.
5. Quantify — featureCounts (or use Salmon/kallisto for pseudoalignment)
• Output: gene × sample counts matrix.
6. Differential Expression — DESeq2 (R/RStudio Cloud)
• Load counts into R (e.g., with DESeqDataSetFromMatrix).
• Make an MA plot, volcano plot, and heatmap of top DEGs. Use pheatmap and EnhancedVolcano.
Expected outputs: normalized counts, list of DEGs with log2FC and adjusted p-values.
Pitfalls & tips
• Always check library sizes and sample clustering (PCA) before claiming biological results.
• Use replicates (≥2 per group) — singletons are unreliable.
• If running in Colab, use small datasets (2–4 paired samples) to stay within RAM.
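The counts matrix from step 5 has to be normalized for library size before samples can be compared. As intuition, here is counts-per-million (CPM) plus a log transform on an invented 3-gene × 2-sample matrix (DESeq2’s own normalization, median-of-ratios, is more sophisticated; this is just the idea):

```python
import math

# CPM + log normalization sketch on a toy gene x sample counts matrix.
counts = {                      # gene -> counts per sample [s1, s2]
    "geneA": [100, 400],
    "geneB": [300, 100],
    "geneC": [600, 500],
}
n_samples = 2
# Library size = total counts per sample (here 1000 each).
lib_sizes = [sum(counts[g][i] for g in counts) for i in range(n_samples)]

cpm = {g: [c / lib_sizes[i] * 1e6 for i, c in enumerate(vals)]
       for g, vals in counts.items()}
log_cpm = {g: [math.log2(v + 1) for v in vals] for g, vals in cpm.items()}

# Sanity check: each sample's CPM column sums back to one million.
col_sums = [sum(cpm[g][i] for g in cpm) for i in range(n_samples)]
```

After this kind of scaling, a PCA of `log_cpm` is a fair first look at sample clustering.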
2) Variant Calling (Toy VCF Project)
Goal: small region (chr22), call SNPs, annotate
Tools: bwa, samtools, bcftools, vcftools, SnpEff or VEP
Steps
1. Get reads — a small WGS or targeted run for chr22.
2. Index the reference (bwa index, samtools faidx).
3. Align (bwa mem) and sort (samtools sort).
4. Mark duplicates (optional).
5. Call variants (bcftools mpileup + bcftools call).
6. Annotate (SnpEff or VEP).
Outputs: filtered VCF with high-confidence SNPs and annotations (impact, gene).
Tips & pitfalls
• Coverage matters: low coverage → many false positives. For practice, choose datasets with ≥10–20× local depth.
• Visualize with IGV: load BAM + VCF to inspect individual variant evidence.
• Keep filters conservative for beginners (QUAL ≥ 20, DP ≥ 10).
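In practice those conservative thresholds are applied with bcftools, but the logic is simple enough to sketch in pure Python over toy VCF lines (the records are invented; the column layout — QUAL in field 6, key=value pairs in INFO — follows the VCF specification):

```python
# Conservative VCF filtering sketch: keep variants with QUAL >= 20
# and INFO/DP >= 10. Toy records, illustration only.
def passes(vcf_line, min_qual=20.0, min_dp=10):
    """Keep a VCF data line only if QUAL and INFO/DP clear thresholds."""
    fields = vcf_line.split("\t")       # CHROM POS ID REF ALT QUAL FILTER INFO
    qual = float(fields[5])
    info = dict(kv.split("=") for kv in fields[7].split(";") if "=" in kv)
    return qual >= min_qual and int(info.get("DP", 0)) >= min_dp

lines = [
    "chr22\t100\t.\tA\tG\t50\tPASS\tDP=30",
    "chr22\t200\t.\tC\tT\t5\tPASS\tDP=40",   # low QUAL -> dropped
    "chr22\t300\t.\tG\tA\t60\tPASS\tDP=4",   # low DP   -> dropped
]
kept = [l for l in lines if passes(l)]
```

Writing a filter like this once makes bcftools’ expression syntax much less mysterious.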
3) scRNA-seq Mini Project (Seurat; 2k PBMC)
Goal: basic scRNA pipeline on small PBMC dataset
Tools: R + Seurat (or Scanpy in Python) — RStudio Cloud works well.
Steps
1. Get a small PBMC dataset (2k cells) — many are provided in Seurat tutorials.
2. Load and preprocess (Seurat): normalize, select variable features, scale, run PCA, cluster, embed with UMAP.
3. Identify markers (e.g., with FindAllMarkers).
4. Interpret clusters — map marker genes to cell types (e.g., CD3D → T cells).
Tips
• Filter by mitochondrial % (percent.mt) to remove dead/dying cells.
• If memory is tight, downsample cells or limit to top variable genes.
• Use Seurat vignettes for example code; RStudio Cloud runs small datasets smoothly.
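The percent.mt filter is nothing exotic: the fraction of a cell’s counts coming from mitochondrial (MT-) genes. Here is the idea in pure Python on an invented two-cell toy table (the 10% cutoff is a common choice but dataset-dependent — an assumption here, not a rule):

```python
# Mitochondrial-fraction QC -- the idea behind Seurat's percent.mt,
# on a toy cell -> {gene: count} table. Values are invented.
cells = {
    "cell1": {"CD3D": 50, "MT-CO1": 5,  "ACTB": 45},
    "cell2": {"CD3D": 2,  "MT-CO1": 70, "ACTB": 28},   # likely dying
}

def percent_mt(counts):
    """Percentage of a cell's counts that map to MT- genes."""
    total = sum(counts.values())
    mito = sum(v for g, v in counts.items() if g.startswith("MT-"))
    return 100.0 * mito / total

# Assumed cutoff: drop cells with more than 10% mitochondrial reads.
kept = [c for c, counts in cells.items() if percent_mt(counts) < 10.0]
```

cell2 (70% mitochondrial) gets dropped — exactly what Seurat’s `subset(..., percent.mt < 10)` step does at scale.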
4) Metagenomics Pipeline (Galaxy or Colab)
Goal: QC → taxonomic classification → visualization
Tools: FastQC, Trimmomatic, Kraken2 (or MetaPhlAn), Bracken, Krona
Steps
1. QC & trim as in the RNA-seq pipeline.
2. Classify reads
• Kraken2 (k-mer approach) assigns a taxon to each read.
• Use Bracken to estimate species abundances from the Kraken output.
3. Visualize
• Krona makes interactive pie charts from the Kraken report.
• Or in Python: stacked barplots or PCoA using Bray–Curtis distances.
What to check
• Relative abundance tables (species × sample).
• Diversity metrics: Shannon, Simpson.
• Compare groups with ordination (PCoA) and PERMANOVA.
Pitfalls
• Contamination (lab/reagent) is common — include negative controls if available.
• Short reads → ambiguous assignments; use conservative confidence thresholds.
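The Shannon index listed under “What to check” is easy to compute by hand once you have an abundance table. A pure-Python sketch (abundance numbers invented):

```python
import math

# Shannon diversity from an abundance vector -- a standard
# metagenomics summary metric.
def shannon(abundances):
    """Shannon index H = -sum(p * ln p) over nonzero proportions."""
    total = sum(abundances)
    return -sum((a / total) * math.log(a / total)
                for a in abundances if a > 0)

even   = shannon([25, 25, 25, 25])   # 4 equally abundant species
skewed = shannon([97, 1, 1, 1])      # one dominant species
# even == ln(4) ~ 1.386; the skewed community scores much lower
```

Evenly distributed communities maximize H; one dominant species drags it toward zero — which is why Shannon is a quick sanity check across samples.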
5) Machine Learning in Bioinformatics (Colab — quick, practical)
Goal: small ML projects you can run in Colab (free GPU optional)
Examples: gene expression → cancer type; protein sequence → function; microbiome → habitat
Template workflow (gene expression → classify cancer subtype)
1. Prepare data
• Use a small counts matrix or processed GEO dataset with labels (cancer types).
• Normalize (log1p), optionally z-score features.
2. Feature selection
• Keep top variable genes (e.g., top 1000) to reduce dimensionality.
• Or use PCA to reduce to 50 components.
3. Train/test split
4. Modeling
• Start with RandomForest / XGBoost for tabular omics.
• Evaluate with accuracy, ROC-AUC, confusion matrix.
5. Interpretation
• Feature importance (RandomForest) or SHAP values for explainability.
Protein sequence classification (quick guide)
• Encode sequences as k-mers or use precomputed embeddings (ProtBERT if available).
• Feed into a simple CNN or RandomForest.
• Evaluate precision/recall per class (often imbalanced).
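K-mer encoding sounds fancier than it is: slide a window of length k along the sequence and count what you see. A self-contained sketch (the sequence is invented):

```python
from collections import Counter

# k-mer featurization -- turning a sequence into a count vector that
# a RandomForest or simple CNN can consume.
def kmer_counts(seq, k=3):
    """Counter of overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

feats = kmer_counts("ATGATGA", k=3)
# Overlapping 3-mers: ATG, TGA, GAT, ATG, TGA
# -> {"ATG": 2, "TGA": 2, "GAT": 1}
```

Stacking these counters over a fixed k-mer vocabulary gives the feature matrix most classical sequence classifiers train on.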
Microbiome abundance → prediction
• Use relative abundance vectors (after a CLR transform for compositional data).
• Models: RandomForest, logistic regression with L1 regularization.
• Evaluate and check feature importance (which taxa drive predictions).
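The CLR (centered log-ratio) transform mentioned above divides each abundance by the sample’s geometric mean before taking logs, which removes the compositional “everything sums to 100%” constraint. A minimal sketch (abundances invented; the 0.5 pseudocount is an assumption to handle zeros):

```python
import math

# Centered log-ratio (CLR) transform for compositional microbiome data.
def clr(abundances, pseudo=0.5):
    """CLR: log of each value minus the mean of the logs
    (i.e., log of value / geometric mean)."""
    vals = [a + pseudo for a in abundances]    # pseudocount for zeros
    log_vals = [math.log(v) for v in vals]
    mean_log = sum(log_vals) / len(log_vals)
    return [lv - mean_log for lv in log_vals]

transformed = clr([10, 40, 0, 50])
# CLR vectors sum to zero by construction
```

That zero-sum property is the telltale sign the transform was applied correctly.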
Tips for Colab
• Use small datasets (thousands of features × hundreds of samples) to avoid timeouts.
• Save models/results to Google Drive to prevent data loss.
• Use scikit-learn for fast prototyping; move to TensorFlow/PyTorch for deep models.
General Practical Tips (for all workflows)
• Start small. Test pipelines on 1–3 samples before scaling.
• Use reproducible environments. Save conda/pip requirements or use Colab notebooks.
• Keep metadata tidy. Sample labels, conditions, and batch info are essential.
• Visualize early. PCA/UMAP and QC plots often reveal problems before modeling.
• Document everything. Note commands and parameters — reproducibility matters.
• Back up outputs. Upload intermediate files to Google Drive or your Galaxy account.
How Experts Practice Without HPC
There’s a funny misconception floating around in the bioinformatics world — that experts must be sitting on top of monster servers, GPUs roaring like dragons, RAM the size of a small planet.
In reality?
Experts are the most resource-efficient people in the room.
They don’t waste compute.
They don’t run full datasets blindly.
They don’t cry over RAM errors because they rarely reach them.
Here’s how the pros actually work, and how you can copy every technique right now on your own laptop or Colab.
1. They always start small — tiny sample subsets
Experts rarely jump into the full dataset. They test pipelines on:
• 1 disease sample + 1 control
• first 100k reads from a FASTQ
• 200–500 cells in a scRNA dataset
This gives them a fast feedback loop.
Why this works:
If your alignment fails, or featureCounts throws an error, or your Seurat pipeline collapses from missing metadata… you learn it immediately, not after waiting two hours.
Beginners waste compute.
Experts waste nothing.
2. They test pipelines on 2–3 samples first
Before running anything big, experts do a “pilot run.”
For example:
• test HISAT2 on chr22 only
• run DESeq2 on 3 tiny count files
• check one VCF through bcftools filters
• try normalization using just one Seurat cluster
This “pilot-first” method reveals:
• badly formatted metadata
• mismatched read groups
• wrong genome index
• missing annotation files
The small mistakes surface early, not after a long run.
3. They use lightweight tools instead of heavy alignment
Instead of full alignment to the genome, experts use tools built for speed:
• salmon — quantifies RNA-seq without alignment
• kallisto — superfast pseudoalignment
• minimap2 — efficient for long reads
• STARsolo — scRNA-seq optimized mode
These tools are almost magical.
They generate results with high accuracy and a fraction of the resources.
Why?
They skip the expensive step of full alignment and jump straight to matching transcripts.
Even a 4 GB RAM laptop handles this like a pro.
4. They use Google Colab for benchmarking & heavy steps
Colab = free CPU + free GPU + free RAM.
Experts use it like a temporary lab.
Typical expert workflow:
• download small subset to local
• prepare pipeline
• upload to Colab
• test scaling
• run heavy parts like STAR or big ML models
Beginners underestimate Colab.
Experts squeeze it dry.
5. They downsample FASTQ files to make testing fast
Using seqtk or samtools view, they reduce dataset sizes artificially.
For example: seqtk sample -s100 reads.fastq 100000 > subset.fastq keeps a random 100k reads.
This keeps:
• real structure
• real errors
• real complexity
…at 1–5% of the size.
Result:
They can debug alignment and QC without waiting for hours.
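The same downsampling idea is easy to see in pure Python: a FASTQ record is 4 lines, so pick records at random with a fixed seed for reproducibility (toy input invented; for real files, stream from disk instead of building lists):

```python
import random

# Random FASTQ downsampling -- the idea behind `seqtk sample -s100`,
# sketched in pure Python on an in-memory list of lines.
def subsample_fastq(lines, fraction, seed=100):
    """Keep ~`fraction` of 4-line FASTQ records, reproducibly."""
    rng = random.Random(seed)          # fixed seed -> same subset each run
    out = []
    for i in range(0, len(lines), 4):  # one record = 4 lines
        if rng.random() < fraction:
            out.extend(lines[i:i + 4])
    return out

# Toy input: 1000 identical records; ~5% should survive.
record = ["@read", "ACGT", "+", "IIII"]
subset = subsample_fastq(record * 1000, fraction=0.05)
```

The fixed seed matters: rerunning the pipeline on the same subset is what makes debugging conclusions trustworthy.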
6. They load only the top variable genes in scRNA-seq
Full scRNA datasets often have:
• 20k genes
• 50k cells
Trying to plot the whole thing melts laptops.
Experts avoid that.
They use:
• top 2000 variable genes
• PCA on reduced matrices
• clustering with smaller feature spaces
This speeds up:
• normalization
• PCA
• UMAP
• clustering
…and reduces RAM from 12 GB → 2 GB.
Beginners load everything and crash.
Experts load only what matters.
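“Top variable genes” is just a variance ranking. Here is the selection step in pure Python on an invented three-gene toy (Seurat/Scanpy use a variance-stabilized version of this, but the ranking idea is the same):

```python
from statistics import pvariance

# Select the most variable genes -- how scRNA tools cut ~20k genes
# down to ~2000 before PCA. Toy expression values, top-2 selection.
expr = {
    "ACTB":  [5.0, 5.1, 4.9, 5.0],    # stable housekeeping gene
    "CD3D":  [0.0, 8.0, 0.1, 7.5],    # varies strongly across cells
    "MKI67": [0.0, 0.2, 6.0, 5.8],    # varies strongly across cells
}

def top_variable(expr, n):
    """Genes ranked by per-gene variance, highest first."""
    ranked = sorted(expr, key=lambda g: pvariance(expr[g]), reverse=True)
    return ranked[:n]

top = top_variable(expr, n=2)   # keeps CD3D and MKI67, drops ACTB
```

Dropping near-constant genes like ACTB costs almost no signal but slashes the matrix PCA and UMAP have to chew through.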
7. They skip alignment entirely when possible
This is a secret superpower.
If the biological question doesn’t need it, they avoid alignment by using:
• salmon
• kallisto
• pseudoalignment
• reference transcriptomes only
Because alignment is the slowest, most RAM-hungry step.
Skipping it saves hours.
8. They use cloud notebooks to avoid local RAM limits
When notebooks get heavy, pros shift instantly to:
• Google Colab
• Kaggle Notebooks
• Saturn Cloud
• CodeOcean
These handle:
• big matrices
• scRNA-seq objects
• ML pipelines
• PCA
• UMAP
…without crashing your laptop.
It’s like outsourcing RAM to the universe.
Why This Matters
Every great bioinformatician learns the same lesson early:
Raw skill beats raw hardware.
When you think cleverly, you don’t need HPC.
You need brains, strategy, and the discipline to avoid wasteful computing.
The best part?
Every trick above is yours now.
You can practice like an expert even before you become one — and the path ahead opens wider because of it.
Whenever you feel stuck or overwhelmed, remember:
Your laptop is more powerful than you think, especially when guided by smart habits.
And there’s a whole world of clever workflows waiting to bloom inside your fingertips.
What to Focus On (If You Want True Mastery)
What separates a casual bioinformatics learner from someone who becomes dangerously good isn’t hardware or software… it’s mindset.
The strongest people in this field aren’t the ones with the biggest server — they’re the ones who can look at a dataset and feel what’s wrong, what’s missing, and what the biology is whispering underneath the noise.
Here’s the deeper, richer version of each skill… the kind that actually makes you a master.
1. QC Intuition — the superpower everyone underestimates
QC is not a checklist.
It’s a sense.
It grows slowly, and then suddenly you can “see” noise the way a musician hears when a note is off.
This intuition helps you catch things like:
• a weird spike in read quality that suggests adapter contamination
• cells clustering by sequencing batch, not biology
• an unexpected drop in mapping rates
• strange GC biases hinting at library prep issues
• VCF files where variants cluster suspiciously on one strand
This instinct is what protects you from false results.
It’s the heart of bioinformatics.
2. Statistics — the anchor beneath all the algorithms
Machine learning is flashy, but statistics is the spine.
Understanding:
• p-values
• variance
• normalization
• distributions
• false discovery rates
• heteroscedasticity (when variance changes with expression level)
…will let you interpret results instead of blindly accepting them.
A person who understands statistics can defeat fancy ML models with a single well-done differential expression analysis.
3. Visualization — turning chaos into stories
Plots aren’t decorations; they’re diagnostics.
Learn to read:
• PCA
• UMAP
• t-SNE
• heatmaps
• volcano plots
• coverage plots
• alignment browsers (IGV)
A single PCA plot can expose a bad sample.
A volcano plot can reveal a biologically impossible gene.
A UMAP can whisper “your clustering is fake.”
Visualization turns numbers into insight.
4. Pipelines — your ability to build complete stories
Experts don’t run random commands.
They build pipelines that turn raw FASTQ files into biological conclusions.
A strong pipeline thinker knows:
• how to chain tools
• how to validate outputs at each step
• how parameters affect results
• how to debug mismatches
• how to automate reproducibility
Pipelines are how raw data becomes knowledge.
5. Scripting — your freedom from clicking buttons
Whether Python, R, or bash — scripting is the difference between:
“I hope this works again…”
and
“I can reproduce this forever.”
Scripting teaches you:
• logic
• structure
• automation
• reproducibility
• scalability
• clarity
Even simple scripts give you complete control over data.
6. Reproducibility — the sign of a real scientist
Anyone can get results.
Only experts can reproduce them.
That means:
• version control
• documented parameters
• organized folders
• clear notebooks
• fixed random seeds
• saved environments
• clean metadata
Reproducibility is your professional signature.
7. Pattern Recognition — the quiet art of noticing
A seasoned bioinformatician starts noticing patterns the way astrophysicists notice faint stars.
• “This coverage drop looks like a deletion.”
• “These genes are mitochondrial… maybe the cells are dying.”
• “This VCF looks too dense — these might be false positives.”
• “This cluster is too sharp… maybe over-clustering happened.”
Pattern recognition doesn’t come from textbooks.
It comes from touching data again and again until it feels familiar.
8. Biological Reasoning — the soul of the entire field
Computers don’t understand biology.
You do.
And the more biological thinking you develop, the stronger your analyses become.
For example:
• immune cells shouldn’t express neuronal markers
• housekeeping genes shouldn’t fluctuate wildly
• certain mutations only appear in specific cancers
• mitochondrial genes hint at cell stress
• batch effects often follow sample collection dates
Biology gives meaning to data.
Without biological reasoning, results become mathematical hallucinations.
Why This Matters
If you master these eight pillars, you won’t just “run tools.”
You’ll understand data the way a seasoned traveler understands the terrain.
You’ll catch mistakes before they happen.
You’ll design cleaner pipelines, trustable results, and stronger models.
Skills stay with you forever — laptops and servers don’t.
And with these skills, even a simple machine becomes a powerful scientific instrument in your hands.
Conclusion
The myth that bioinformatics requires expensive hardware has scared away so many brilliant minds. The reality is far simpler and far friendlier: you can grow into a skilled, confident bioinformatician with nothing more than curiosity, Wi-Fi, and a laptop that can open a browser without crying.
Every expert you admire started small.
Most learned their first pipelines on ordinary machines.
Skill, not servers, is what shapes mastery.
The future of bioinformatics is bright precisely because people like you can enter it without barriers. When you know how to think, how to question, how to QC, how to interpret — you become powerful in ways no GPU can replace.
Your laptop is enough.
Your free tools are enough.
Your persistence is more than enough.
And the moment you realize that, your learning curve becomes unstoppable.
Comments Section
Your story matters:
Have you ever felt blocked because you didn’t have HPC?
What did you try? How did you overcome it?
Drop your thoughts below!