Installing mamba on Windows
Installing mamba on Windows runs into all kinds of problems. I spent several days on it, stepped into just about every pitfall there is, and worked out a procedure for installing mamba on Windows that I have already rolled out on all of our lab's Windows servers. If you follow the steps below, you will most likely be fine; if you run into other problems, feel free to discuss them in the comments, and I will answer what I can.
First, create an environment for mamba and then install the required libraries. Please create a brand-new environment rather than reusing an old one, and stick to the versions given here.
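The package names below are the usual ones for Mamba, but the Python version, CUDA build, and install order are assumptions to adapt to your machine, not a guaranteed recipe; the exact version pins are what the full post is for:

```bash
# Create a fresh conda environment -- do not reuse an old one
conda create -n mamba python=3.10 -y
conda activate mamba

# PyTorch with CUDA support; pick the index URL matching your CUDA version
pip install torch --index-url https://download.pytorch.org/whl/cu118

# Mamba's two core dependencies. On Windows these are where the pitfalls
# live: they may need prebuilt community wheels or a source build
pip install causal-conv1d
pip install mamba-ssm
```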
Processing the BraTS2023 Dataset and Reading .nii Files in Python
BraTS2023-MEN (Brain Tumor Segmentation 2023 Meningioma Challenge) is one of the five segmentation sub-tasks of BraTS2023. Unlike the usual BraTS glioma segmentation, this sub-task aims to segment meningioma from multi-modal MR images (mpMRI). Released in May 2023, the dataset comprises 1650 cases collected from 6 centers in total, of which the labeled training set has 1000 cases; each case provides four MR sequences as input (t1w, t1c, t2w, t2f) together with the meningioma segmentation. The annotations cover the non-enhancing tumor core (NETC), the surrounding non-enhancing FLAIR hyperintensity (SNFH), and the enhancing tumor (ET). The validation set provides images without labels, and predictions can be submitted to the official site for evaluation; the test set is not public.
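The volumes ship as NIfTI (.nii.gz) files, which Python can read with nibabel. A minimal sketch (the case folder and file name below are hypothetical; adjust them to your local layout):

```python
import nibabel as nib

# Hypothetical path following the per-case, per-sequence naming
img = nib.load("BraTS-MEN-00000-000/BraTS-MEN-00000-000-t1c.nii.gz")

data = img.get_fdata()   # voxel intensities as a float array
print(data.shape)        # spatial dimensions of the volume
print(img.affine)        # voxel-to-world coordinate transform
```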
Processing the BraTS2019 Dataset and Reading .nii Files in Python
BraTS is the dataset of the MICCAI Brain Tumor Segmentation challenge; the BraTS 2018 training set contains 285 cases.
Each case has four modalities (t1, t2, flair, t1ce) and three regions to segment: whole tumor (WT), enhancing tumor (ET), and tumor core (TC), i.e., three labels.
Each case contains MRI sequences in the 4 modalities plus one seg file, and all volumes have the size (240, 240, 155).
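A sketch for loading one full case into a single multi-modal array (the case directory and file-name pattern are assumptions modeled on the usual BraTS naming; adapt them to your download):

```python
import numpy as np
import nibabel as nib

case = "BraTS19_Training_001"                # hypothetical case folder
modalities = ["t1", "t2", "flair", "t1ce"]

# Load the four MRI sequences and stack them channel-first
volumes = [nib.load(f"{case}/{case}_{m}.nii.gz").get_fdata()
           for m in modalities]
x = np.stack(volumes)                        # (4, 240, 240, 155)

seg = nib.load(f"{case}/{case}_seg.nii.gz").get_fdata()
assert seg.shape == (240, 240, 155)
```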
Multi-modal Medical Image Datasets
The BraTS2019 dataset from The Brain Tumor Segmentation Challenge 2019 contains 335 labeled MRI scans. Each case has four modalities: T1, T1Gd, T2, and FLAIR. The annotated regions comprise three tumor sub-regions: edema (ED), enhancing tumor (ET), and non-enhancing tumor (NET). The corresponding segmentation target ROIs are the enhancing tumor region (ET), the tumor core region (TC = ET + NET), and the whole tumor region (WT = ED + ET + NET).
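The three target ROIs are nested compositions of the raw seg labels. The label values below (1 = NET/necrosis, 2 = ED, 4 = ET) follow the common BraTS convention but should be verified against your release:

```python
import numpy as np

def brats_regions(seg):
    """Compose the nested evaluation regions from raw BraTS labels."""
    et = seg == 4                    # enhancing tumor
    tc = np.isin(seg, (1, 4))        # tumor core = ET + NET
    wt = np.isin(seg, (1, 2, 4))     # whole tumor = ED + ET + NET
    return et, tc, wt
```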
Multi-modal disease segmentation with continual learning and adaptive decision fusion
Multi-modal disease segmentation is essential for the diagnosis and treatment of patients. Advanced algorithms have been proposed; however, two challenging issues remain unsolved, i.e., insufficient knowledge sharing and limited modeling of inter-modal relations. To this end, we develop a novel framework for multi-modal disease segmentation based on improved continual learning and adaptive decision fusion. Specifically, continual learning with k-means sampling is developed to promote knowledge sharing across multi-modal medical images. In addition, we propose an adaptive decision fusion technique that uses the Naive Bayes algorithm to better exploit the relationships between different modalities. To evaluate the proposed model, we chose two typical tasks, i.e., myocardial pathology segmentation and brain tumor segmentation. Four benchmark datasets, i.e., the myocardial pathology segmentation challenge 2020 (MyoPS 2020), the brain tumor segmentation challenge 2018 (BraTS 2018), BraTS 2019, and BraTS 2020, are utilized to train and test our framework. Both the qualitative and quantitative results demonstrate that the proposed model is effective and has advantages over peer state-of-the-art (SOTA) methods.
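For intuition, decision fusion under the Naive Bayes independence assumption reduces to p(y | x_1, ..., x_M) ∝ p(y)^(1-M) ∏_m p(y | x_m). A generic sketch of that fusion rule (illustrative only, not necessarily the paper's exact formulation):

```python
import numpy as np

def naive_bayes_fusion(probs, prior=None):
    """Fuse per-modality class posteriors, probs of shape (M, C)."""
    probs = np.asarray(probs, dtype=np.float64)
    m, c = probs.shape
    if prior is None:
        prior = np.full(c, 1.0 / c)           # uniform class prior
    # log p(y | x_1..x_M) = (1 - M) log p(y) + sum_m log p(y | x_m) + const
    log_post = (1 - m) * np.log(prior) + np.log(probs + 1e-12).sum(axis=0)
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()
```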
Multi-modality medical image segmentation via adversarial learning with CV energy functional
Medical image processing methods based on deep learning have gradually become mainstream, and automatic segmentation of brain tumors from multi-modality magnetic resonance images (MRI) using deep learning is key to the diagnosis of gliomas. Our hybrid framework consists of a Segmentor and a Critic. A new Transformer-CV-Unet (TCUnet) is introduced to capture richer semantic features, and we employ it as the generator of a GAN to carry out the segmentation task with increased robustness and efficiency. With the generator segmenting the target images, the Critic is built to tightly merge the latent representation with hierarchical characteristics from each modality. Moreover, a hybrid adversarial scheme with a multi-phase CV energy functional is introduced, and our hybrid network, AdvTCUnet, combines the advantages of both methods. Furthermore, extensive experiments on BraTS 19-21 show that the proposed model outperforms existing state-of-the-art techniques for segmenting brain tumor MRI (e.g., the Dice Similarity Coefficients of ET, WT, and TC on BraTS 21 reach 0.8642, 0.9303, and 0.9060, respectively).
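The Dice Similarity Coefficient quoted above is the standard overlap metric, DSC = 2|P ∩ G| / (|P| + |G|). A minimal reference implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Dice Similarity Coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```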
A Novel 3D Unsupervised Domain Adaptation Framework for Cross-Modality Medical Image Segmentation
We consider the problem of volumetric (3D) unsupervised domain adaptation (UDA) in cross-modality medical image segmentation, aiming to perform segmentation on the unannotated target domain (e.g., MRI) with the help of a labeled source domain (e.g., CT). Previous UDA methods in medical image analysis usually suffer from two challenges: 1) they focus on processing and analyzing data at the 2D level only, thus missing semantic information from the depth level; 2) one-to-one mapping is adopted during the style-transfer process, leading to insufficient alignment in the target domain. Different from the existing methods, in our work, we conduct a first-of-its-kind investigation on multi-style image translation for complete image alignment to alleviate the domain shift problem, and also introduce 3D segmentation in domain adaptation tasks to maintain semantic consistency at the depth level. In particular, we develop an unsupervised domain adaptation framework incorporating a novel quartet self-attention module to efficiently enhance relationships between widely separated features in spatial regions on a higher dimension, leading to a substantial improvement in segmentation accuracy in the unlabeled target domain. In two challenging cross-modality tasks, specifically brain structures and multi-organ abdominal segmentation, our model is shown to outperform current state-of-the-art methods by a significant margin, demonstrating its potential as a benchmark resource for the biomedical and health informatics research community.
MATR: Multimodal Medical Image Fusion via Multiscale Adaptive Transformer
Owing to the limitations of imaging sensors, it is challenging to obtain a medical image that simultaneously contains functional metabolic information and structural tissue details. Multimodal medical image fusion, an effective way to merge the complementary information in different modalities, has become a significant technique to facilitate clinical diagnosis and surgical navigation. With powerful feature representation ability, deep learning (DL)-based methods have improved such fusion results but still have not achieved satisfactory performance. Specifically, existing DL-based methods generally depend on convolutional operations, which can well extract local patterns but have limited capability in preserving global context information. To compensate for this defect and achieve accurate fusion, we propose a novel unsupervised method to fuse multimodal medical images via a multiscale adaptive Transformer termed MATR. In the proposed method, instead of directly employing vanilla convolution, we introduce an adaptive convolution for adaptively modulating the convolutional kernel based on the global complementary context. To further model long-range dependencies, an adaptive Transformer is employed to enhance the global semantic extraction capability. Our network architecture is designed in a multiscale fashion so that useful multimodal information can be adequately acquired from the perspective of different scales. Moreover, an objective function composed of a structural loss and a region mutual information loss is devised to construct constraints for information preservation at both the structural level and the feature level. Extensive experiments on a mainstream database demonstrate that the proposed method outperforms other representative and state-of-the-art methods in terms of both visual quality and quantitative evaluation. We also extend the proposed method to address other biomedical image fusion issues, and the pleasing fusion results illustrate that MATR has good generalization capability. The code of the proposed method is available at https://github.com/tthinking/MATR.
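The adaptive convolution described above rescales the kernel from global context. A generic dynamic-convolution sketch in that spirit (purely illustrative; MATR's actual operator is in the linked repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    """Conv layer whose kernel is gated per-sample by global context."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.gate = nn.Sequential(                 # global context -> gates
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, out_ch), nn.Sigmoid())
        self.pad = k // 2

    def forward(self, x):
        scale = self.gate(x)                       # (B, out_ch)
        outs = []
        for i in range(x.size(0)):                 # per-sample kernels
            w = self.weight * scale[i].view(-1, 1, 1, 1)
            outs.append(F.conv2d(x[i:i + 1], w, padding=self.pad))
        return torch.cat(outs, 0)
```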
Hybrid cross-modality fusion network for medical image segmentation with contrastive learning
Medical image segmentation has been widely adopted in artificial intelligence-based clinical applications. The integration of medical texts into image segmentation models has significantly improved segmentation performance, so it is crucial to design an effective fusion manner to integrate the paired image and text features. Existing multi-modal medical image segmentation methods fuse the paired image and text features through a non-local attention mechanism, which lacks local interaction. Besides, they lack a mechanism to enhance the relevance of the paired features and keep the discriminability of unpaired features during training, which limits segmentation performance. To address these problems, we propose a hybrid cross-modality fusion network (HCFNet) based on contrastive learning for medical image segmentation. The key designs of our proposed method are a multi-stage cross-modality contrastive loss and a hybrid cross-modality feature decoder. The multi-stage cross-modality contrastive loss is utilized to enhance the discriminability of the paired features and separate the unpaired features. Furthermore, the hybrid cross-modality feature decoder conducts local and non-local cross-modality feature interaction by a local cross-modality fusion module and a non-local cross-modality fusion module, respectively. Experimental results show that our method achieves state-of-the-art results on two public medical image segmentation datasets.
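The paired/unpaired behavior described above is what a symmetric InfoNCE objective provides: paired image and text features attract, unpaired ones repel. A generic sketch (the standard formulation, not HCFNet's exact multi-stage loss):

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_feat, txt_feat, temperature=0.07):
    """img_feat, txt_feat: (B, D) paired features; positives on the diagonal."""
    img = F.normalize(img_feat, dim=-1)
    txt = F.normalize(txt_feat, dim=-1)
    logits = img @ txt.t() / temperature           # (B, B) similarities
    targets = torch.arange(img.size(0), device=img.device)
    # symmetric: image-to-text and text-to-image cross-entropy
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```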