About OncoVision Universal
How our AI-powered tissue analysis works
AI-Powered Histopathology Analysis
OncoVision Universal uses deep learning to analyze tissue images and classify cellular patterns, helping researchers accelerate cancer detection workflows.
How It Works
Upload histopathology images (H&E stained tissue slides, biopsies, or cell samples) to a project dataset.
Our DenseNet-121 neural network processes each image through 121 convolutional layers, extracting tissue features at multiple scales.
The model outputs a probability for each of 7 tissue classes, ranging from normal to malignant patterns, together with an overall confidence score.
Grad-CAM generates an attention heatmap showing exactly which tissue regions influenced the classification decision.
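Step 3 of the pipeline amounts to a softmax over seven raw scores from the network's final layer; the top probability doubles as the confidence score. A minimal sketch in plain Python (the logit values are illustrative, not real model output):

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example raw scores for the 7 tissue classes (illustrative values)
logits = [0.2, 1.4, -0.5, 3.1, 0.0, -1.2, 0.7]
probs = softmax(logits)
confidence = max(probs)            # confidence score = top-class probability
top_class = probs.index(confidence)  # index of the predicted tissue class
```

The same top-class index is what downstream steps (recommendation text and heatmap target) key off.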
7 Tissue Classifications
Model & Training
DenseNet-121: a 121-layer convolutional neural network with dense connections between layers. Pre-trained on ImageNet (1.2M images) and adapted with a custom 7-class tissue classification head.
GPU-accelerated inference powered by NVIDIA RTX 3080 Ti (12GB VRAM). Each image is analyzed in under 500ms.
The model is actively being trained on histopathology datasets. Each analysis contributes to our training pipeline: as more tissue images are processed and validated by pathologists, the model's accuracy and confidence improve over time. Future milestones include fine-tuning on the PCam and BRACS benchmark datasets.
Taxonomy: Classification Categories
The analysis engine classifies tissue across three dimensions. These categories are managed in the Taxonomy section.
The taxonomy is derived from established medical and computational pathology standards:
- Tissue Classes: based on the WHO Classification of Tumours (ICD-O-3 morphology codes) and the College of American Pathologists (CAP) cancer reporting protocols, simplified into 7 broad diagnostic categories for the DenseNet-121 classifier.
- Target Organs: selected from the most common cancer sites tracked by the SEER (Surveillance, Epidemiology, and End Results) program, covering >85% of cancer diagnoses.
- Imaging Modalities: standard staining and imaging techniques used in clinical histopathology labs worldwide, following CAP laboratory accreditation guidelines.
The 7 output categories of the DenseNet-121 classifier:
When an image is uploaded, the user can tag it with an organ and imaging modality. This metadata travels with the image through the pipeline.
The DenseNet-121 model outputs a probability for each of the 7 tissue classes. The model's final layer maps directly to these taxonomy categories.
The top predicted class determines the recommendation text, urgency level, and whether the case is flagged for expert review.
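The class-to-recommendation step can be pictured as a simple lookup keyed on the top class, with a confidence threshold as a safety net. Everything here is a hypothetical sketch: the class names, urgency labels, wording, and the 0.70 threshold are illustrative assumptions, not OncoVision's actual rules:

```python
# Hypothetical mapping from predicted class to recommendation metadata.
RECOMMENDATIONS = {
    "normal":    {"urgency": "low",  "text": "No action required.",
                  "expert_review": False},
    "malignant": {"urgency": "high", "text": "Flagged for pathologist review.",
                  "expert_review": True},
}

def recommend(top_class: str, confidence: float,
              review_threshold: float = 0.70) -> dict:
    """Look up recommendation text/urgency for the top predicted class."""
    rec = dict(RECOMMENDATIONS[top_class])
    # Low-confidence predictions are escalated to expert review
    # regardless of the predicted class.
    if confidence < review_threshold:
        rec["expert_review"] = True
    return rec
```

A design choice worth noting: routing low-confidence cases to expert review regardless of class keeps the human in the loop exactly where the model is least reliable.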
Grad-CAM: Why We Show Heatmaps
Grad-CAM (Gradient-weighted Class Activation Mapping) is an explainability technique that reveals which regions of a tissue image the model focused on when making its classification decision.
This is critical for clinical trust: pathologists can verify whether the AI is looking at the right cellular structures rather than artifacts or background noise.
Red/yellow regions drove the classification.
Blue/green regions had minimal influence.
OncoVision Universal is a research platform. All outputs are probabilistic and may be incorrect. Results should never be used for clinical diagnosis without expert pathologist review. The model is under active development and its accuracy is continuously improving through training on validated datasets.