The cell types, basement membranes, and connective structures that organize tissues and tumors span length scales from microscopic organelles to entire organs (0.1 µm to more than 10⁴ µm). Hematoxylin and eosin (H&E) staining and immunohistochemical microscopy have long been the methods of choice for studying tissue architecture, and clinical histopathology remains the primary means of diagnosing and managing diseases such as cancer. Classical histology, however, provides too little molecular data to properly classify disease genes, analyze developmental pathways, or identify cell subtypes.
High-plex imaging of healthy and diseased tissues (also known as spatial proteomics) makes it possible to identify cell types, assess cell states (resting, proliferating, dying, etc.), and study cell signaling pathways. In a preserved 3D environment, high-plex imaging also reveals the morphologies and positions of acellular structures essential for tissue integrity. High-plex imaging techniques vary in resolution, field of view, and marker diversity (plex), but they all produce 2D images of tissue slices that are typically 5-10 µm thick.
Single-cell data produced by segmenting and quantifying multiplexed images complement single-cell RNA sequencing (scRNA-seq) data, which have substantially advanced our understanding of healthy and diseased cells and tissues. Unlike dissociative RNA sequencing, however, multiplexed tissue imaging preserves morphological and spatial information. At the same time, high-plex tissue images are much harder for computers to analyze than the images of cultured cells that have so far been the focus of biology-oriented machine vision systems.
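The segmentation-then-quantification step mentioned above is what turns a labeled mask into a single-cell data table. A minimal sketch of that step, using a toy labeled mask and NumPy (the function name `quantify` and the toy arrays are illustrative, not from the paper):

```python
import numpy as np

def quantify(label_mask, intensity):
    """Mean marker intensity per cell ID in a labeled segmentation
    mask (0 = background), yielding a per-cell measurement table."""
    means = {}
    for cell_id in np.unique(label_mask):
        if cell_id == 0:  # skip background
            continue
        means[int(cell_id)] = float(intensity[label_mask == cell_id].mean())
    return means

# Toy example: two "cells" in a 2x2 image
mask = np.array([[0, 1],
                 [2, 2]])
signal = np.array([[0.0, 1.0],
                   [2.0, 4.0]])
table = quantify(mask, signal)  # {1: 1.0, 2: 3.0}
```

In practice a tool such as scikit-image's region-property utilities performs this aggregation over many marker channels at once, producing the single-cell tables that are compared with scRNA-seq data.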
Techniques for segmenting metazoan cells are well developed, but tissue images pose a harder segmentation problem because of cell crowding and the diversity of cell shapes. With the now-ubiquitous application of convolutional neural networks (CNNs) to image classification, object detection, and synthetic image generation, machine learning-based segmentation algorithms have become mainstream. Architectures such as ResNet, VGG16, and more recently UNet and Mask R-CNN are widely used for their ability to learn millions of parameters and generalize across data sets.
Since most cell types have a single nucleus, localizing nuclei is a natural starting point for segmenting both cultured cells and tissues, and nuclear stains with high signal-to-background ratios are widely available. Previous work proposed random forest approaches that use an ensemble of decision trees to assign per-pixel class probabilities from multi-channel images. A significant drawback of random forest models, however, is that they have far less learning capacity than CNNs, so much remains to be explored about using CNNs with multi-channel data to improve nuclear segmentation.
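The random forest baseline described above amounts to classifying each pixel from its channel values. A minimal sketch with scikit-learn on synthetic data (the 2-channel toy image and threshold-based labels are stand-ins for real stain channels and annotations; real pipelines such as ilastik also add texture and edge features per pixel):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Flatten a multi-channel image (H, W, C) into an (H*W, C)
    feature matrix for per-pixel classification."""
    h, w, c = image.shape
    return image.reshape(h * w, c)

# Synthetic 2-channel "image": channel 0 plays the role of a DNA stain
rng = np.random.default_rng(0)
img = rng.random((32, 32, 2))
labels = (img[:, :, 0] > 0.5).astype(int)  # toy "nucleus" ground truth

X = pixel_features(img)
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=20, random_state=0)
clf.fit(X, y)

# Per-pixel class probabilities, reshaped back to image space
proba = clf.predict_proba(X)[:, 1].reshape(32, 32)
```

Thresholding and connected-component labeling of `proba` would then yield candidate nuclei; the limited capacity of the trees relative to a CNN is exactly the drawback noted above.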
The most common way to extend training data to account for image artifacts is computational augmentation: randomly rotating, shearing, flipping, and rescaling images before training. This prevents algorithms from latching onto information unrelated to the image content, such as its orientation. Focus artifacts have so far been handled by supplementing the training data with a computed Gaussian blur. Gaussian blur, however, is only a rough approximation of the blur produced by any band-limited optical system, such as a real microscope, and it does not capture the effects of refractive index mismatch and inconsistent light scattering.
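The conventional augmentation pipeline described above can be sketched in a few lines with NumPy and SciPy (the parameter ranges are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def augment(image, rng):
    """One conventional augmentation pass: random flip, random
    rotation, and an optional Gaussian blur standing in for defocus."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)
    angle = rng.uniform(-15, 15)  # degrees; illustrative range
    out = rotate(out, angle, reshape=False, mode="nearest")
    if rng.random() < 0.5:
        out = gaussian_filter(out, sigma=rng.uniform(0.5, 2.0))
    return out

rng = np.random.default_rng(1)
img = rng.random((64, 64))
aug = augment(img, rng)
```

The Gaussian blur branch is precisely the approximation criticized above: it ignores how refractive index mismatch and scattering shape real defocus, which is what motivates the real augmentations introduced next.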
This research explores machine learning methods to improve segmentation accuracy on multiplexed tissue images containing typical imaging artifacts. The researchers manually annotated a variety of normal tissues and tumors to create training and test sets with ground-truth labels. They then used these data to measure the segmentation accuracy of three deep learning networks, each trained and tested independently: UNet, Mask R-CNN, and Pyramid Scene Parsing Network (PSPNet). The resulting models form a series of Universal Models for Identifying Cells and Segmenting Tissue (UnMICST), each based on a different type of ML network but trained on the same data. The study identified two strategies that increase segmentation accuracy across all three networks. The first combines images of nuclear chromatin stained with DNA-intercalating dyes with nuclear envelope staining (NES) images. The second adds real augmentations, defined here as deliberately defocused and oversaturated images included in the training data to harden the models against the kinds of artifacts seen in real tissue images. They find that real augmentation significantly outperforms conventional Gaussian blur augmentation, yielding statistically significant improvements in model robustness, and that the benefits of NES data and real augmentations are cumulative across tissue types.
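The first strategy above reduces, at the data level, to feeding the networks a two-channel input of DNA and NES stains. A minimal sketch of that channel stacking, plus a crude computational proxy for an oversaturated acquisition (the paper's real augmentations use genuinely defocused and oversaturated acquisitions, not this clipping trick; function names here are illustrative):

```python
import numpy as np

def stack_channels(dna, nes):
    """Stack a DNA-stain image and a nuclear-envelope-stain (NES)
    image into one (H, W, 2) input for a segmentation network."""
    assert dna.shape == nes.shape
    return np.stack([dna, nes], axis=-1)

def oversaturate(image, clip_frac=0.6):
    """Rough proxy for detector oversaturation: clip intensities
    above a fraction of the image maximum."""
    return np.minimum(image, clip_frac * image.max())

rng = np.random.default_rng(2)
dna = rng.random((16, 16))
nes = rng.random((16, 16))
x = stack_channels(dna, oversaturate(nes))  # shape (16, 16, 2)
```

Because both strategies act on the input data rather than the architecture, they apply unchanged to UNet, Mask R-CNN, and PSPNet, which is consistent with the cumulative gains reported across all three networks.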
Check out the paper and code. All credit for this research goes to the researchers on this project.
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He enjoys connecting with people and collaborating on interesting projects.