# python – Why does numpy calculate matrix determinant incorrectly

## Question:

``````
import numpy as np

A = np.array([[1, 1, 2, -1],
              [2, -1, 0, -5],
              [-1, -1, 0, -2],
              [6, 3, 4, -3]])

print(np.linalg.det(A))
``````

This is how I compute it. The determinant should be equal to zero; I checked on paper and with online calculators. But this code prints `5.329070518200744e-15`. What am I doing wrong? Maybe I was inattentive somewhere, and if not, how can I compute it correctly?

## Answer:

I suspect it may depend only on the Python and especially the NumPy version. In Google Colaboratory it comes out as exactly 0.0, even if you print 64 decimal places. I tried setting a different data type (the default for this matrix is `numpy.int64`), for example `numpy.int16` or `numpy.float32`; it makes no difference, the result is still 0.0. `numpy.float16`, however, cannot be used: `linalg` raises an error because it does not support that type. Out of curiosity, check what data type you actually get in the matrix:

``````
print(type(A[0, 0]))
``````
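Whatever the input dtype, `np.linalg.det` computes the determinant in floating point (via an LU factorization), so a mathematically zero determinant can come back as rounding noise on the order of 1e-15, depending on the BLAS/LAPACK build. The robust check is a tolerance comparison rather than `== 0`; a minimal sketch:

``````python
import numpy as np

A = np.array([[1, 1, 2, -1],
              [2, -1, 0, -5],
              [-1, -1, 0, -2],
              [6, 3, 4, -3]])

d = np.linalg.det(A)
# d may come out as exactly 0.0 or as a tiny residue like 5.3e-15;
# both mean the matrix is singular.
print(np.isclose(d, 0.0))  # True
``````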

In Google Colaboratory the versions are:

``````
Python 3.6.9
Numpy 1.18.5
``````
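The dtype experiment can be reproduced directly. Integer matrices are promoted to double precision before the factorization, while `float16` is rejected by `numpy.linalg` outright (the exact error message may vary between versions); a quick sketch:

``````python
import numpy as np

A = np.array([[1, 1, 2, -1],
              [2, -1, 0, -5],
              [-1, -1, 0, -2],
              [6, 3, 4, -3]])

# Integer inputs are promoted to float64; float32 stays single precision:
for dt in (np.int16, np.int64, np.float32):
    d = np.linalg.det(A.astype(dt))
    print(dt.__name__, "->", type(d).__name__, d)

# float16 is not supported by numpy.linalg at all:
try:
    np.linalg.det(A.astype(np.float16))
except TypeError as e:
    print("float16:", e)
``````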

The code I tested everything with:

``````
import numpy as np

A = np.array([[1, 1, 2, -1],
              [2, -1, 0, -5],
              [-1, -1, 0, -2],
              [6, 3, 4, -3]]  # add ", dtype=np.float32" here to test other types
             )

print(np.__version__)
print(type(A[0, 0]))
print(np.linalg.det(A))
print(f"{np.linalg.det(A):.64f}")
``````

Result:

``````
1.18.5
<class 'numpy.int64'>
0.0
0.0000000000000000000000000000000000000000000000000000000000000000
``````
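If you need the determinant of an integer matrix to come out exactly zero on every setup, you can compute it in exact rational arithmetic instead of floating point. A minimal sketch using the standard library's `fractions` module (the helper `exact_det` is illustrative, not a NumPy function):

``````python
from fractions import Fraction

def exact_det(rows):
    """Determinant via fraction-exact Gaussian elimination.
    Exact for integer (or rational) input; no rounding error."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    det = Fraction(1)
    for i in range(n):
        # Find a nonzero pivot, swapping rows if needed (swap flips the sign).
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            det = -det
        det *= m[i][i]
        # Eliminate the entries below the pivot.
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return det

A = [[1, 1, 2, -1],
     [2, -1, 0, -5],
     [-1, -1, 0, -2],
     [6, 3, 4, -3]]
print(exact_det(A))  # 0
``````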