In commutative algebra, the extension and contraction of ideals are operations that relate the ideals of two rings connected by a ring homomorphism.
Let A and B be two commutative rings with unity, and let f : A → B be a (unital) ring homomorphism. If $\mathfrak{a}$ is an ideal in A, then $f(\mathfrak{a})$ need not be an ideal in B (e.g. take f to be the inclusion of the ring of integers $\mathbb{Z}$ into the field of rationals $\mathbb{Q}$). The extension $\mathfrak{a}^{e}$ of $\mathfrak{a}$ in B is defined to be the ideal in B generated by $f(\mathfrak{a})$. Explicitly,

$$\mathfrak{a}^{e} = \left\{ \sum_i y_i f(x_i) : x_i \in \mathfrak{a},\ y_i \in B \right\}$$
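As a concrete illustration (not from the original text; the modulus and generator are chosen purely for the example), an extension can be computed by brute force when B is a small finite quotient. The sketch below, in Python, takes f : Z → Z/12Z to be the reduction map and extends the principal ideal $\mathfrak{a} = (8)$:

```python
# Sketch: extension of a = (8) in Z under the reduction map f : Z -> Z/12Z.
# The modulus 12 and generator 8 are illustrative choices.
n = 12
f = lambda x: x % n                 # the quotient homomorphism Z -> Z/nZ

# For a principal ideal a = (g), the B-linear combinations sum(y_i * f(x_i))
# reduce to multiples of f(g), so the extension is just {y * f(g) : y in B}.
a_e = {(y * f(8)) % n for y in range(n)}

print(sorted(a_e))                  # [0, 4, 8]: (8)^e equals the ideal (4) in Z/12Z
```

Here the extension happens to coincide with the set-image $f(\mathfrak{a})$, but in general (as with $\mathbb{Z} \subseteq \mathbb{Q}$ above) the image must be enlarged to obtain an ideal.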
If $\mathfrak{b}$ is an ideal of B, then $f^{-1}(\mathfrak{b})$ is always an ideal of A, called the contraction $\mathfrak{b}^{c}$ of $\mathfrak{b}$ to A.
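Dually, a contraction can be inspected pointwise. Continuing the same illustrative example (again, not from the original article), the sketch below contracts the ideal $\mathfrak{b} = (4)$ of Z/12Z back to Z; since $f^{-1}(\mathfrak{b})$ is infinite, only a finite window is printed, but one checks directly that $\mathfrak{b}^{c} = 4\mathbb{Z}$:

```python
# Sketch: contraction of b = (4) in Z/12Z along the reduction map f : Z -> Z/12Z.
n = 12
f = lambda x: x % n

b = {(y * 4) % n for y in range(n)}          # b = (4) = {0, 4, 8} in Z/12Z

window = range(-24, 25)                      # f^{-1}(b) is infinite; sample a window
contraction = [x for x in window if f(x) in b]

print(contraction)                           # exactly the multiples of 4: b^c = 4Z
```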
Assuming f : A → B is a unital ring homomorphism, $\mathfrak{a}$ is an ideal in A, and $\mathfrak{b}$ is an ideal in B, then:

- $\mathfrak{b}$ is prime in B $\Rightarrow$ $\mathfrak{b}^{c}$ is prime in A.
- $\mathfrak{a}^{ec} \supseteq \mathfrak{a}$ (the containment can be strict; see the sketch below)
- $\mathfrak{b}^{ce} \subseteq \mathfrak{b}$
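For intuition on why the containment $\mathfrak{a}^{ec} \supseteq \mathfrak{a}$ need not be an equality, here is an illustrative check with the same hypothetical reduction map (none of this is from the original article):

```python
# Sketch: a^{ec} can strictly contain a. Take f : Z -> Z/12Z and a = (8) = 8Z.
n = 12
f = lambda x: x % n

a_e = {(y * f(8)) % n for y in range(n)}     # a^e = {0, 4, 8} = (4) in Z/12Z

window = range(0, 49)                        # finite window into Z
a_ec = [x for x in window if f(x) in a_e]    # a^{ec} restricted to the window

print(a_ec)        # [0, 4, 8, 12, ...]: every multiple of 4
print(4 in a_ec)   # True, yet 4 is not in a = 8Z, so a^{ec} = 4Z strictly contains a
```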
It is false, in general, that $\mathfrak{a}$ being prime (or maximal) in A implies that $\mathfrak{a}^{e}$ is prime (or maximal) in B. Many classic examples of this stem from algebraic number theory. For example, consider the embedding $\mathbb{Z} \to \mathbb{Z}[i]$. In $B = \mathbb{Z}[i]$, the element 2 factors as $2 = (1+i)(1-i)$, where (one can show) neither of $1+i$, $1-i$ is a unit in B. So $(2)^{e}$ is not prime in B (and therefore not maximal either). Indeed, $(1 \pm i)^{2} = \pm 2i$ shows that $(1+i) = \bigl((1-i) - (1-i)^{2}\bigr)$ and $(1-i) = \bigl((1+i) - (1+i)^{2}\bigr)$, so $(1+i)$ and $(1-i)$ generate the same ideal, and therefore $(2)^{e} = (1+i)^{2}$.
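The arithmetic above is mechanical to verify. The sketch below uses Python's built-in complex numbers (exact for Gaussian integers of this size) as a stand-in for $\mathbb{Z}[i]$; it is a numerical check of the stated identities, not a proof about the ideals themselves:

```python
# Verify the Gaussian-integer identities used above, modeling Z[i] with
# Python complex numbers (integer real and imaginary parts stay exact here).
one_plus_i  = complex(1, 1)
one_minus_i = complex(1, -1)

assert one_plus_i * one_minus_i == 2          # 2 = (1 + i)(1 - i)
assert one_plus_i ** 2 == complex(0, 2)       # (1 + i)^2 =  2i
assert one_minus_i ** 2 == complex(0, -2)     # (1 - i)^2 = -2i

# (1 + i) and (1 - i) are obtainable from each other, so they
# generate the same ideal of Z[i]:
assert one_minus_i - one_minus_i ** 2 == one_plus_i
assert one_plus_i - one_plus_i ** 2 == one_minus_i
print("all identities verified")
```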
On the other hand, if f is surjective and $\mathfrak{a} \supseteq \ker f$, then:

- $\mathfrak{a}^{ec} = \mathfrak{a}$ and $\mathfrak{b}^{ce} = \mathfrak{b}$ (illustrated in the sketch below).
- $\mathfrak{a}$ is a prime ideal in A $\Leftrightarrow$ $\mathfrak{a}^{e}$ is a prime ideal in B.
- $\mathfrak{a}$ is a maximal ideal in A $\Leftrightarrow$ $\mathfrak{a}^{e}$ is a maximal ideal in B.
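As a hypothetical sanity check on the surjective case: the reduction map f : Z → Z/12Z is surjective with kernel $12\mathbb{Z}$, so any ideal containing $12\mathbb{Z}$, such as $\mathfrak{a} = (4)$, should be recovered by extension followed by contraction:

```python
# Sketch: when f : Z -> Z/12Z is surjective and a = (4) contains ker f = 12Z,
# the theorem predicts a^{ec} = a. Checked here on a finite window of Z.
n = 12
f = lambda x: x % n

a_e = {(y * f(4)) % n for y in range(n)}     # a^e = (4) = {0, 4, 8} in Z/12Z

window = range(0, 49)
a    = [x for x in window if x % 4 == 0]     # a = 4Z, restricted to the window
a_ec = [x for x in window if f(x) in a_e]    # a^{ec}, restricted to the window

assert a == a_ec
print("a^{ec} == a on the window")
```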
Let K be a field extension of L, and let B and A be the rings of integers of K and L, respectively. Then B is an integral extension of A, and we let f be the inclusion map from A to B. The behaviour of a prime ideal $\mathfrak{a} = \mathfrak{p}$ of A under extension is one of the central problems of algebraic number theory.