Advances in microscopy techniques have led to huge increases in the amount of imaging data being collected, prompting the use of deep-learning methods for image analysis. One class of deep-learning methods, convolutional neural networks (CNNs), has been used to automate the segmentation and classification of phenotypes in developmental biology but has not been widely adopted by researchers without a computational background. In this Issue, Soeren Lienkamp and colleagues debunk some common misconceptions associated with CNNs, such as the supposed requirements for expert computational knowledge, huge training datasets and specific image types, and describe the flexibility of their CNN architecture of choice, U-Net, which is available as a Fiji plug-in. Using various models of human congenital disease, the authors apply trained networks to analyse renal, neural and craniofacial phenotypes in Xenopus embryos. They demonstrate that their U-Net models can identify key morphological features and perform as well as expert annotators when trained on as few as five images per phenotype. The authors also show that these networks can be applied to different imaging setups, developmental stages and staining protocols with minimal intervention. Together, these data demonstrate the power of CNNs for high-throughput analysis of embryonic development and disease, and make deep-learning-based analysis more accessible to those without a computational background.