In machine learning, attention mechanisms enable models to weigh different parts of the input differently when producing output, focusing on the most relevant segments. An attention mechanism computes importance weights over the input and uses them to form a weighted sum of input features, which improves the handling of long sequences in tasks such as language translation. Attention is typically formulated in terms of queries, keys, and values derived from the input: each query is compared against the keys to produce the weights, and those weights combine the corresponding values, yielding more context-sensitive and adaptive models.
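As a concrete illustration of the query–key–value formulation, the sketch below implements scaled dot-product attention, one common variant in which the output is softmax(QKᵀ / √d_k)·V. This is a minimal NumPy example, not the only way attention is defined; the function name and the toy matrix shapes are chosen here for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention output as a weighted sum of value vectors.

    Q: (n_queries, d_k) query matrix
    K: (n_keys, d_k) key matrix
    V: (n_keys, d_v) value matrix
    """
    d_k = Q.shape[-1]
    # Similarity of each query to each key, scaled to stabilize the softmax.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys turns scores into importance weights per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted sum of the value vectors.
    return weights @ V, weights

# Toy example (shapes are arbitrary): 2 queries attending over 3 key/value pairs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 8))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights)       # each row sums to 1: relative relevance of each input position
print(output.shape)  # (2, 8): one context-sensitive vector per query
```

The weight matrix makes the "focusing" explicit: positions whose keys match a query receive weights near 1 and dominate that query's output, while irrelevant positions are suppressed toward 0.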