🔬

AI-Powered Histopathology Analysis

OncoVision Universal uses deep learning to analyze tissue images and classify cellular patterns — helping researchers accelerate cancer detection workflows.

How It Works

⬆️
STEP 1
Upload

Upload histopathology images (H&E stained tissue slides, biopsies, or cell samples) to a project dataset.

🧠
STEP 2
AI Analysis

Our DenseNet-121 neural network processes each image through its 121 layers, extracting tissue features at multiple scales.

📊
STEP 3
Classification

The model outputs probabilities across 7 tissue classes — from normal to malignant pattern-like — with confidence scoring.
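The probability-and-confidence step is a standard softmax over the 7 class logits. The sketch below illustrates the idea in plain Python; the logit values are invented for the example and are not real model output:

```python
import math

TISSUE_CLASSES = [
    "Normal", "Benign", "Inflammatory / Reactive", "Dysplastic / Atypical",
    "Malignant Pattern-Like", "Metastatic Pattern-Like", "Indeterminate",
]

def softmax(logits):
    """Convert raw model outputs into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return (top class, confidence) from the 7 raw logits."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    return TISSUE_CLASSES[top], probs[top]

# Illustrative logits only -- not real model output.
label, conf = classify([0.2, 0.1, 0.4, 1.1, 2.3, 0.5, 0.0])
```

The confidence score reported to the user is simply the probability of the top class.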

🗺️
STEP 4
Heatmap

Grad-CAM generates an attention heatmap showing exactly which tissue regions influenced the classification decision.

7 Tissue Classifications

Normal — Healthy tissue with expected morphology
Benign — Non-cancerous abnormal growth
Inflammatory / Reactive — Immune response or tissue repair
Dysplastic / Atypical — Pre-cancerous cellular changes
Malignant Pattern-Like — Patterns consistent with cancer
Metastatic Pattern-Like — Patterns suggesting spread from primary site
Indeterminate — Requires further evaluation

🧠 Model & Training

Architecture

DenseNet-121 — a 121-layer convolutional neural network with dense connections between layers. Pre-trained on ImageNet (1.2M images) and adapted with a custom 7-class tissue classification head.

Hardware

GPU-accelerated inference powered by NVIDIA RTX 3080 Ti (12GB VRAM). Each image is analyzed in under 500ms.

📈

The model is actively being trained on histopathology datasets. Each analysis contributes to our training pipeline — as more tissue images are processed and validated by pathologists, the model's accuracy and confidence improve over time. Future milestones include fine-tuning on PCam and BRACS benchmark datasets.

🧬 Taxonomy — Classification Categories

The analysis engine classifies tissue across three dimensions. These categories are managed in the Taxonomy section.

📚 Where do these categories come from?

The taxonomy is derived from established medical and computational pathology standards:

  • Tissue Classes — Based on the WHO Classification of Tumours (ICD-O-3 morphology codes) and the College of American Pathologists (CAP) cancer reporting protocols. Simplified into 7 broad diagnostic categories for the DenseNet-121 classifier.
  • Target Organs — Selected from the most common cancer sites tracked by the SEER (Surveillance, Epidemiology, and End Results) program, covering >85% of cancer diagnoses.
  • Imaging Modalities — Standard staining and imaging techniques used in clinical histopathology labs worldwide, following CAP laboratory accreditation guidelines.
🫁 Target Organs

Breast — Ductal & lobular tissue
Lung — Bronchial & alveolar tissue
Colon — Colonic mucosa & polyps
Prostate — Glandular tissue
Skin — Dermal & epidermal layers
Liver — Hepatic tissue
Kidney — Renal cortex & medulla
Thyroid — Follicular architecture

🔬 Imaging Modalities

H&E Stain — Standard histology staining
IHC — Immunohistochemistry markers
PAS — Periodic acid–Schiff staining
Masson Trichrome — Connective tissue stain
Frozen Section — Rapid intraoperative slides
Cytology — Cell-level smear samples
Fluorescence — FISH & immunofluorescence
🏷️ Tissue Classes

The 7 output categories of the DenseNet-121 classifier:

Normal
Benign
Inflammatory / Reactive
Dysplastic / Atypical
Malignant Pattern-Like
Metastatic Pattern-Like
Indeterminate
βš™οΈ How the Analysis Engine Uses the Taxonomy
1Image Intake

When an image is uploaded, the user can tag it with an organ and imaging modality. This metadata travels with the image through the pipeline.
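The intake metadata can be modeled as a small record attached to each upload and validated against the managed taxonomy. The record and field names below are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Taxonomy values from the lists above.
ORGANS = {"Breast", "Lung", "Colon", "Prostate",
          "Skin", "Liver", "Kidney", "Thyroid"}
MODALITIES = {"H&E Stain", "IHC", "PAS", "Masson Trichrome",
              "Frozen Section", "Cytology", "Fluorescence"}

@dataclass
class ImageRecord:
    """Metadata that travels with an image through the pipeline."""
    image_id: str
    organ: Optional[str] = None      # optional user-supplied tag
    modality: Optional[str] = None   # optional user-supplied tag

    def __post_init__(self):
        # Reject tags that fall outside the managed taxonomy.
        if self.organ is not None and self.organ not in ORGANS:
            raise ValueError(f"unknown organ: {self.organ}")
        if self.modality is not None and self.modality not in MODALITIES:
            raise ValueError(f"unknown modality: {self.modality}")

rec = ImageRecord("slide-001", organ="Breast", modality="H&E Stain")
```

Validating at intake means every downstream stage can trust the tags without re-checking them.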

2. Classification

The DenseNet-121 model outputs a probability for each of the 7 tissue classes. The model's final layer maps directly to these taxonomy categories.

3. Results & Routing

The top predicted class determines the recommendation text, urgency level, and whether the case is flagged for expert review.
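One way the routing step can be sketched is a lookup table from top class to urgency and review flag. The urgency levels and the confidence threshold below are illustrative assumptions, not the platform's actual rules:

```python
# Illustrative routing table: top class -> (urgency, needs expert review).
ROUTING = {
    "Normal":                  ("low",    False),
    "Benign":                  ("low",    False),
    "Inflammatory / Reactive": ("medium", False),
    "Dysplastic / Atypical":   ("high",   True),
    "Malignant Pattern-Like":  ("high",   True),
    "Metastatic Pattern-Like": ("high",   True),
    "Indeterminate":           ("medium", True),
}

def route(top_class: str, confidence: float, review_threshold: float = 0.7):
    """Map the top predicted class to an urgency level and a review flag."""
    urgency, review = ROUTING[top_class]
    # Low-confidence predictions are always sent for expert review.
    if confidence < review_threshold:
        review = True
    return {"urgency": urgency, "expert_review": review}

result = route("Malignant Pattern-Like", 0.92)
```

Keeping the table keyed by taxonomy class names is what lets new tissue classes be added through the Taxonomy manager without touching the routing code.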

These categories are extensible — new organs, modalities, and tissue classes can be added through the Taxonomy manager as the platform expands to cover more tissue types and imaging techniques.

πŸ—ΊοΈ Grad-CAM: Why We Show Heatmaps

Grad-CAM (Gradient-weighted Class Activation Mapping) is an explainability technique that reveals which regions of a tissue image the model focused on when making its classification decision.

This is critical for clinical trust — pathologists can verify whether the AI is looking at the right cellular structures rather than artifacts or background noise.

Reading the Heatmap
High attention — red/yellow regions drove the classification.
Low attention — blue/green regions had minimal influence.
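In PyTorch, Grad-CAM can be implemented with a pair of hooks on the last convolutional layer, following the original Selvaraju et al. recipe: weight each feature map by its average gradient, sum, and keep the positive part. The sketch below is generic (demonstrated on a tiny stand-in network), not OncoVision's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradCAM:
    """Gradient-weighted Class Activation Mapping over one target layer."""
    def __init__(self, model: nn.Module, target_layer: nn.Module):
        self.model = model
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inputs, output):
        self.activations = output.detach()

    def _save_gradient(self, module, grad_input, grad_output):
        self.gradients = grad_output[0].detach()

    def __call__(self, x: torch.Tensor, class_idx: int) -> torch.Tensor:
        self.model.zero_grad()
        self.model(x)[0, class_idx].backward()
        # Channel weights = gradients global-average-pooled over space.
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)
        # Weighted sum of feature maps; ReLU keeps only positive evidence.
        cam = F.relu((weights * self.activations).sum(dim=1))
        return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for coloring

# Tiny stand-in classifier; the last dense block of DenseNet-121 is hooked
# the same way in a real pipeline.
conv = nn.Conv2d(3, 8, 3, padding=1)
model = nn.Sequential(conv, nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 7))
heatmap = GradCAM(model, conv)(torch.randn(1, 3, 32, 32), class_idx=4)
```

The normalized map is then upsampled to the input resolution and overlaid on the tissue image with a red-to-blue colormap to produce the heatmaps shown above.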

⚠️
Research Use Only

OncoVision Universal is a research platform. All outputs are probabilistic and may be incorrect. Results should never be used for clinical diagnosis without expert pathologist review. The model is under active development and its accuracy is continuously improving through training on validated datasets.