Large-scale numerical simulations, observations, experiments, and AI computations generate or consume very large datasets that are difficult to analyze, store, and transfer. Data compression is an attractive and effective technique for significantly reducing the size of scientific datasets. This tutorial reviews the state of the art in lossy compression of scientific datasets, discusses two lossy compressors (SZ and ZFP) in detail, and covers compression error assessment metrics and the Z-checker tool for analyzing compression error. The tutorial addresses the following questions: why use lossless and lossy compression; how does compression work; how can compression error be measured and controlled; and what are the current use cases of lossy compression. The tutorial uses examples of real-world scientific datasets to illustrate the different compression techniques and their performance. The tutorial is given by two of the leading teams in this domain and targets students, researchers, and practitioners interested in lossy compression for scientific data.
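To make the idea of error assessment concrete, the sketch below computes two metrics commonly used to evaluate lossy compression of scientific data: the maximum pointwise absolute error (which error-bounded compressors such as SZ are designed to respect) and the peak signal-to-noise ratio (PSNR). This is a minimal illustration, not code from SZ, ZFP, or Z-checker; the function names and sample data are hypothetical.

```python
import math

def max_abs_error(original, decompressed):
    # Largest pointwise deviation; error-bounded compressors
    # guarantee this stays below a user-specified bound.
    return max(abs(a - b) for a, b in zip(original, decompressed))

def psnr(original, decompressed):
    # PSNR = 20 * log10(value_range / RMSE), in decibels;
    # higher means the decompressed data is closer to the original.
    n = len(original)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(original, decompressed)) / n)
    value_range = max(original) - min(original)
    return float("inf") if rmse == 0 else 20 * math.log10(value_range / rmse)

# Hypothetical original data and its lossy reconstruction.
original = [0.0, 1.0, 2.0, 3.0, 4.0]
decompressed = [0.01, 0.99, 2.02, 2.98, 4.0]

print(max_abs_error(original, decompressed))  # stays within a 0.02 absolute bound
print(psnr(original, decompressed))           # roughly 49 dB for this sample
```

Tools like Z-checker report these and many other metrics (e.g., autocorrelation of the error, spectral distortion) so that users can verify a chosen error bound preserves the scientific validity of their analysis.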