Matrix Inverse Calculator
Calculate the inverse of any square matrix. Enter your matrix below and get instant results with detailed step-by-step solutions using Gauss-Jordan elimination.
Matrix A
Inverse A⁻¹
How to Calculate Matrix Inverse
Invertible Condition
A matrix must be square and have a non-zero determinant (det(A) ≠ 0) to be invertible.
Gauss-Jordan Method
Augment matrix with identity matrix [A|I] and reduce to [I|A⁻¹] using row operations.
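As a minimal pure-Python sketch of this procedure (the function name invert_gauss_jordan is ours, and partial pivoting is added for numerical stability), reducing the augmented block [A|I] to [I|A⁻¹] looks like this:

```python
def invert_gauss_jordan(A):
    """Invert a square matrix by reducing the augmented block [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented matrix [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot candidate.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (det(A) = 0)")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in aug]

print(invert_gauss_jordan([[2.0, 1.0], [1.0, 1.0]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```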
Adjugate Method
Calculate A⁻¹ = (1/det(A)) × adj(A) using cofactor matrix and determinant.
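The adjugate formula can be sketched directly with cofactor (Laplace) expansion — illustrative only, since this expansion costs O(n!) and is practical just for small matrices (the helper names det, minor, and invert_adjugate are ours):

```python
def minor(M, i, j):
    """Matrix M with row i and column j removed."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]

def det(M):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(n))

def invert_adjugate(A):
    """A^-1 = (1/det(A)) * adj(A), where adj(A) is the transposed cofactor matrix."""
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    n = len(A)
    # Cofactor C[i][j] = (-1)^(i+j) * det(minor(A, i, j)); adj(A) = C^T,
    # hence the swapped (j, i) indices below.
    return [[(-1) ** (i + j) * det(minor(A, j, i)) / d for j in range(n)]
            for i in range(n)]

print(invert_adjugate([[2, 1], [1, 1]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```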
Verification
Check result: A × A⁻¹ = I (identity matrix). This confirms the inverse is correct.
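A sketch of that check in plain Python (allowing a small floating-point tolerance rather than demanding exact zeros and ones; the helper names are ours):

```python
def matmul(A, B):
    """Plain triple-loop matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_identity(M, tol=1e-9):
    """True if M equals the identity matrix up to floating-point tolerance tol."""
    return all(abs(M[i][j] - (1.0 if i == j else 0.0)) <= tol
               for i in range(len(M)) for j in range(len(M)))

A     = [[2.0, 1.0], [1.0, 1.0]]
A_inv = [[1.0, -1.0], [-1.0, 2.0]]
print(is_identity(matmul(A, A_inv)))  # True
```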
Mathematical Theory & History
What is Matrix Inverse?
The inverse of a square matrix A, denoted A⁻¹, is a matrix such that A × A⁻¹ = A⁻¹ × A = I, where I is the identity matrix. Not all matrices have inverses; only square matrices with non-zero determinants are invertible (also called non-singular matrices).
Definition: A × A⁻¹ = I
Condition: det(A) ≠ 0
Formula: A⁻¹ = (1/det(A)) × adj(A)
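For a 2×2 matrix the formula above collapses to a well-known closed form, sketched here (the function name invert_2x2 is ours): for A = [[a, b], [c, d]], adj(A) = [[d, -b], [-c, a]] and det(A) = ad - bc.

```python
def invert_2x2(a, b, c, d):
    """Closed-form inverse of A = [[a, b], [c, d]]:
    A^-1 = (1 / (a*d - b*c)) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("det(A) = 0, so A has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

print(invert_2x2(2, 1, 1, 1))  # [[1.0, -1.0], [-1.0, 2.0]]
```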
Historical Background
The concept of matrix inverse emerged from the study of linear equations. Carl Friedrich Gauss developed the elimination method in the early 1800s, which became the foundation for modern matrix inversion algorithms.
Wilhelm Jordan, a German geodesist, later refined Gauss's method, leading to the Gauss-Jordan elimination we use today. The systematic study of matrix inverses was further developed by Arthur Cayley in his foundational work on matrix algebra in the 1850s.
Properties of Matrix Inverse
Uniqueness
If A⁻¹ exists, it is unique
Inverse of Inverse
(A⁻¹)⁻¹ = A
Product Rule
(AB)⁻¹ = B⁻¹A⁻¹
Transpose Rule
(A⁻¹)ᵀ = (Aᵀ)⁻¹
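These properties can be spot-checked numerically. A small sketch using the closed-form 2×2 inverse (helper names ours; the specific matrices A and B are arbitrary invertible examples):

```python
def inv2(M):
    """2x2 inverse via the (ad - bc) closed form."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) <= tol for i in range(2) for j in range(2))

A = [[2.0, 1.0], [0.0, 1.0]]
B = [[1.0, 2.0], [3.0, 4.0]]

print(close(inv2(inv2(A)), A))                          # (A^-1)^-1 = A
print(close(inv2(mul2(A, B)), mul2(inv2(B), inv2(A))))  # (AB)^-1 = B^-1 A^-1
print(close(transpose(inv2(A)), inv2(transpose(A))))    # (A^-1)^T = (A^T)^-1
```

Note the reversed order in the product rule: undoing "apply B, then A" means undoing A first, then B.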
Real-World Applications
Linear Systems
Solving Ax = b using x = A⁻¹b when A is invertible
Computer Graphics
Reversing transformations, camera matrices, and coordinate conversions
Engineering
Control systems, signal processing, and structural analysis
Economics & Finance
Portfolio optimization, input-output models, and econometric analysis
Frequently Asked Questions
When does a matrix have an inverse?
A matrix has an inverse if and only if it is square (n×n) and has a non-zero determinant (det(A) ≠ 0). Such matrices are called invertible or non-singular. If det(A) = 0, the matrix is singular and has no inverse.
How can I verify that a calculated inverse is correct?
Multiply the original matrix A by the calculated inverse A⁻¹. If the result is the identity matrix I (ones on the diagonal, zeros elsewhere), then your inverse is correct: A × A⁻¹ = I.
Which methods can be used to calculate a matrix inverse?
Gauss-Jordan elimination is the most common and works by augmenting [A|I] and reducing to [I|A⁻¹]. The adjugate method uses A⁻¹ = (1/det(A)) × adj(A) and is better for understanding the theory. LU decomposition is efficient for large matrices.
Can non-square matrices have inverses?
No, only square matrices can have true inverses. However, non-square matrices can have pseudo-inverses (the Moore-Penrose inverse), which provide the "best" approximate solution to linear systems.
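A brief sketch of the pseudo-inverse, assuming NumPy is available (the matrix A and vector b are arbitrary examples of an overdetermined system):

```python
import numpy as np

# A is 3x2: not square, so it has no true inverse.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudo-inverse, shape (2, 3)
x = A_pinv @ b              # least-squares "best" solution to Ax = b

print(A_pinv.shape)  # (2, 3)
```

Here x minimizes ‖Ax − b‖; no exact solution exists because the system has more equations than unknowns.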
Why is my matrix non-invertible?
Your matrix is non-invertible (singular) because its determinant is zero. This happens when the rows or columns are linearly dependent: one row or column can be expressed as a combination of the others. Check for duplicate or proportional rows or columns.
What does a non-invertible matrix mean geometrically?
Geometrically, a non-invertible matrix represents a transformation that "collapses" space by reducing dimensionality. For example, a 2×2 singular matrix might map the entire plane onto a line, making the transformation irreversible.
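That collapse is easy to see numerically. In this sketch (the matrix and test vectors are arbitrary illustrations), the second row of A is twice the first, so every input vector lands on the line y = 2x:

```python
# A singular 2x2 matrix: the second row is twice the first, so det(A) = 0.
A = [[1.0, 2.0],
     [2.0, 4.0]]

def apply(M, v):
    """Apply the linear map M to the 2D vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Images of very different input vectors all satisfy y = 2x.
for v in ([1.0, 0.0], [0.0, 1.0], [3.0, -1.0]):
    x, y = apply(A, v)
    print((x, y), y == 2 * x)
```

Because the whole plane is squashed onto one line, distinct inputs share the same image, and no map can send each output back to a unique input.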
How are matrix inverses used to solve linear systems?
For the system Ax = b, if A is invertible, the solution is x = A⁻¹b. However, in practice it is more efficient to use methods like LU decomposition or Gaussian elimination rather than explicitly computing A⁻¹.
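A sketch of that practical approach — solving Ax = b by Gaussian elimination and back substitution without ever forming A⁻¹ (the function name solve is ours):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with back substitution,
    without explicitly forming A^-1."""
    n = len(A)
    # Work on the augmented system [A | b].
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x

# 2x + y = 3, x + y = 2  =>  x = 1, y = 1
print(solve([[2.0, 1.0], [1.0, 1.0]], [3.0, 2.0]))  # [1.0, 1.0]
```

This does roughly a third of the arithmetic of full inversion and is the approach production solvers (e.g. LAPACK-based routines) take for a single right-hand side.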