Install BLAS / LAPACK on Windows

BLAS / LAPACK on Windows¶

Windows has no default BLAS / LAPACK library; by “default” we mean one installed with the operating system.

Numpy needs a BLAS library that has CBLAS C language wrappers.

Here is a list of the options that we know about.

ATLAS¶

The ATLAS libraries have been the default BLAS / LAPACK libraries for numpy binary installers on Windows to date (end of 2015).

ATLAS uses comprehensive tests of parameters on a particular machine to choose from a range of algorithms to optimize BLAS and some LAPACK routines. Modern versions (>= 3.9) perform reasonably well on BLAS benchmarks. Each ATLAS build is optimized for a particular machine (CPU capabilities, L1 / L2 cache size, memory speed), and ATLAS does not select routines at runtime but at build time, meaning that a default ATLAS build can be badly optimized for a particular processor. The main developer of ATLAS is Clint Whaley. His main priority is optimizing for HPC machines, and he does not give much time to supporting Windows builds. Not surprisingly, ATLAS is difficult to build on Windows, and is not well optimized for 64-bit Windows.

Disadvantages:

  • By design, the compilation step of ATLAS tunes the output library to the exact architecture on which it is compiling. This means good performance for machines very like the build machine, but worse performance on other machines;
  • no runtime optimization for the running CPU;
  • only one major developer (Clint Whaley);
  • compilation is difficult, slow and error-prone on Windows;
  • not optimized for 64-bit Windows.

Because there is no run-time adaptation to the CPU, ATLAS built for a CPU with SSE3 instructions will likely crash on a CPU that does not have SSE3 instructions, and ATLAS built for a SSE2 CPU will not be able to use SSE3 instructions. Therefore, numpy installers on Windows use the “superpack” format, where we build three ATLAS libraries:

  • without CPU support for SSE instructions;
  • with support for SSE2 instructions;
  • with support for SSE3 instructions;

We make three Windows .exe installers, one for each of these ATLAS versions, and then combine them into a single “superpack” installer. When run, the superpack installer first checks which instructions the CPU of the target machine supports, and then installs the matching numpy / ATLAS package.
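The superpack’s selection step amounts to picking the most capable build the CPU can run. A minimal sketch of that logic in Python (the installer file names are hypothetical, for illustration only):

```python
def pick_installer(cpu_flags):
    """Pick the most capable ATLAS-based numpy installer the CPU can run.

    cpu_flags is a set of lower-case CPU feature names, e.g. {"sse", "sse2"}.
    The returned installer names are hypothetical examples, not real files.
    """
    if "sse3" in cpu_flags:
        return "numpy-atlas-sse3.exe"   # best: SSE3-tuned build
    if "sse2" in cpu_flags:
        return "numpy-atlas-sse2.exe"   # fallback: SSE2-tuned build
    return "numpy-atlas-nosse.exe"      # last resort: no SSE at all
```

The ordering matters: a build compiled for SSE3 would crash on an SSE2-only machine, so the check must start from the most demanding variant and fall through to the least demanding one.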

There is no way of doing this when installing from binary wheels, because the wheel installation process consists of unpacking files to given destinations, and does not allow pre-install or post-install scripts.

One option would be to build a binary wheel with ATLAS that depends on SSE2 instructions. It seems that 99.5% of Windows machines have SSE2 (see: Windows versions). It is not technically difficult to put a check in the numpy __init__.py file to give a helpful error message and die when the CPU does not have SSE2:

Intel Math Kernel Library¶

The MKL has a reputation for being fast, particularly on Intel chips (see the MKL Wikipedia entry). It has good performance on BLAS / LAPACK benchmarks across the range, except on AMD processors.

It is closed-source, but available for free under the Community licensing program.

The MKL is covered by the Intel Simplified Software License (see the Intel license page). The Simplified Software License does allow us, the developers, to distribute copies of the MKL with our built binaries, where we include their terms of use in our distribution. These include:


This clause appears to apply to the users of our binaries, not to us, the authors of the binaries. This is a change from Intel’s previous MKL license, which required us, the authors, to pay Intel’s legal fees if a user sued Intel.

Advantages:

  • At or near maximum speed;
  • Runtime processor selection, giving good performance on a range of different CPUs.

AMD Core Math Library¶

The ACML was AMD’s equivalent to the MKL, with similar or moderately worse performance. At the time of writing (December 2015), AMD has marked the ACML as “end of life”, and suggests using the AMD compute libraries instead.

The ACML does not appear to contain a CBLAS interface.

Binaries linked against ACML have to conform to the ACML license which, as for the older MKL license, requires software linked to the ACML to subject users to the ACML license terms including:

2. Restrictions. The Software contains copyrighted and patented material, trade secrets and other proprietary material. In order to protect them, and except as permitted by applicable legislation, you may not:

a) decompile, reverse engineer, disassemble or otherwise reduce the Software to a human-perceivable form;

b) modify, network, rent, lend, loan, distribute or create derivative works based upon the Software in whole or in part […]

AMD compute libraries¶

AMD advertise the AMD compute libraries (ACL) as the successor to the ACML.

The ACL page points us to the BLAS-like instantiation software framework (BLIS) for BLAS and to libflame for LAPACK.

BLAS-like instantiation software framework¶

BLIS is “a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries.”

It provides a superset of BLAS, along with a CBLAS layer.

It can be compiled into a BLAS library. As of writing (December 2015) Windows builds are experimental. BLIS does not currently do run-time hardware detection.

As of December 2015 the developer mailing list was fairly quiet, with only a few emails since August 2015.

Advantages:

  • portable across platforms;
  • modern architecture.

Disadvantages:

  • Windows builds are experimental;
  • no runtime hardware detection.

libflame¶

libflame is an implementation of some LAPACK routines. See the libflame project page for more detail.

libflame can also be built to include a full LAPACK implementation. It is a sister project to BLIS.

COBLAS¶

COBLAS is a “Reference BLAS library in C99”, BSD license. A quick look at the code in April 2014 suggested it used very straightforward implementations that are not highly optimized.

Netlib reference implementation¶

Most available benchmarks (e.g. R benchmarks, BLAS LAPACK review) show the reference BLAS / LAPACK to be considerably slower than any optimized library.

Eigen¶

Eigen is “a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.”

Eigen is mostly covered by the Mozilla Public License 2.0, but some features are covered by the LGPL. The non-MPL2 features can be disabled.

It is technically possible to compile Eigen into a BLAS library, but there is currently no CBLAS interface.

GotoBLAS2¶

GotoBLAS2 is the predecessor to OpenBLAS. It was a library written by Kazushige Goto, and released under a BSD license, but is no longer maintained. Goto now works for Intel. It was at or near the top of the benchmarks on which it was tested (e.g. the BLAS LAPACK review and the Eigen benchmarks). Like MKL and ACML, GotoBLAS2 chooses routines at runtime according to the processor. It does not detect modern processors (after 2011).


OpenBLAS¶

OpenBLAS is a fork of GotoBLAS2 updated for newer processors. It uses the 3-clause BSD license.

Julia uses OpenBLAS by default.

See OpenBLAS on github for current code state. It appears to be actively merging pull requests. There have been some worries about bugs and lack of tests on the numpy mailing list and the octave list.

OpenBLAS on Win32 seems to be quite stable. Some OpenBLAS issues on Win64 can be addressed with a single-threaded version of the library.

Advantages:

  • at or near fastest implementation;
  • runtime hardware detection.

Disadvantages:

  • questions about quality control.

LAPACK¶

The LAPACK release history:

  • VERSION 1.0 : February 29, 1992
  • VERSION 1.0a : June 30, 1992
  • VERSION 1.0b : October 31, 1992
  • VERSION 1.1 : March 31, 1993
  • VERSION 2.0 : September 30, 1994
  • VERSION 3.0 : June 30, 1999
  • VERSION 3.0 + update : October 31, 1999
  • VERSION 3.0 + update : May 31, 2000
  • VERSION 3.1 : November 2006
  • VERSION 3.1.1 : February 2007
  • VERSION 3.2 : November 2008
  • VERSION 3.2.1 : April 2009
  • VERSION 3.2.2 : June 2010
  • VERSION 3.3.0 : November 2010
  • VERSION 3.3.1 : April 2011
  • VERSION 3.4.0 : November 2011
  • VERSION 3.4.1 : April 2012
  • VERSION 3.4.2 : September 2012
  • VERSION 3.5.0 : November 2013
  • VERSION 3.6.0 : November 2015
  • VERSION 3.6.1 : June 2016
  • VERSION 3.7.0 : December 2016
  • VERSION 3.7.1 : June 2017
  • VERSION 3.8.0 : November 2017

LAPACK is a library of Fortran subroutines for solving the most commonly occurring problems in numerical linear algebra.

LAPACK is a freely-available software package. It can be included in commercial software packages (and has been). We only ask that proper credit be given to the authors, for example by citing the LAPACK Users’ Guide. The license used for the software is the modified BSD license, see: https://github.com/Reference-LAPACK/lapack/blob/master/LICENSE

Like all software, it is copyrighted. It is not trademarked, but we do ask the following: if you modify the source for these routines we ask that you change the name of the routine and comment the changes made to the original.

We will gladly answer any questions regarding the software. If a modification is done, however, it is the responsibility of the person who modified the routine to provide support.

LAPACK releases are also available on netlib at: http://www.netlib.org/lapack/

The distribution contains (1) the Fortran source for LAPACK, and (2) its testing programs. It also contains (3) the Fortran reference implementation of the Basic Linear Algebra Subprograms (the Level 1, 2, and 3 BLAS) needed by LAPACK. However this code is intended for use only if there is no other implementation of the BLAS already available on your machine; the efficiency of LAPACK depends very much on the efficiency of the BLAS. It also contains (4) CBLAS, a C interface to the BLAS, and (5) LAPACKE, a C interface to LAPACK.

  • LAPACK can be installed with make. The configuration has to be set in the make.inc file. A make.inc.example for a Linux machine running GNU compilers is given in the main directory. Some machine-specific make.inc files are also available in the INSTALL directory.
  • LAPACK also includes a CMake build. You will need to have CMake installed on your machine (CMake is available at http://www.cmake.org/). CMake allows an easy installation on a Windows machine.
  • Specific information on running LAPACK under Windows is available at http://icl.cs.utk.edu/lapack-for-windows/lapack/.
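As a rough sketch, an out-of-source CMake build of the reference LAPACK, with the CBLAS and LAPACKE C interfaces described above enabled, might look like the following. The paths and options shown are illustrative assumptions, not prescriptions:

```shell
# Assumes the LAPACK sources have already been obtained, e.g. a checkout of
# https://github.com/Reference-LAPACK/lapack in a directory named "lapack".
cd lapack
mkdir build
cd build
# CBLAS=ON and LAPACKE=ON also build the C interfaces to BLAS and LAPACK.
cmake -DCMAKE_BUILD_TYPE=Release -DCBLAS=ON -DLAPACKE=ON ..
cmake --build .
ctest                                  # run the LAPACK test suite
cmake --install . --prefix /usr/local  # install prefix is an example
```

On Windows the same commands work from a Visual Studio or MinGW command prompt, with CMake selecting an appropriate generator.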

LAPACK has been thoroughly tested, on many different types of computers. The LAPACK project supports the package in the sense that reports of errors or poor performance will gain immediate attention from the developers. Such reports, descriptions of interesting applications, and other comments should be sent by electronic mail to lapack@icl.utk.edu.


For further information on LAPACK please read our FAQ at http://www.netlib.org/lapack/#_faq.

A list of known problems, bugs, and compiler errors for LAPACK is maintained on netlib http://www.netlib.org/lapack/release_notes.html. Please see as well https://github.com/Reference-LAPACK/lapack/issues.

A User forum is also available to help you with the LAPACK library at http://icl.cs.utk.edu/lapack-forum/. You can also contact directly the LAPACK team at lapack@icl.utk.edu.

LAPACK includes a thorough test suite. We recommend that, after compilation, you run the test suite.

For complete information on LAPACK testing, please consult LAPACK Working Note 41, “Installation Guide for LAPACK”.

Installing SciPy on Windows: Lapack / Blas resources not found

I am trying to install Python and a number of packages on a 64-bit Windows 7 desktop. I have installed Python 3.4, installed Microsoft Visual Studio C++, and successfully installed numpy, pandas and some others. I get the following error when trying to install scipy:

I am using pip install in offline mode; the install command I use is:

I have read posts saying that a compiler is required which, if I understand correctly, is the VS C++ compiler. I am using the 2010 version, since I am using Python 3.4. This worked for the other packages.

Should I use a Windows binary, or is there a way to get pip install to work?

Many thanks for the help.

The solution to the problem of missing BLAS / LAPACK libraries for SciPy installations on 64-bit Windows 7 is described here:

Installing Anaconda is much easier, but you still do not get Intel MKL or GPU support without paying for it (they are in the MKL Optimizations and Accelerate add-ons for Anaconda; I am not sure whether they also use PLASMA and MAGMA). With MKL optimization, numpy has outperformed IDL on large matrix computations by a factor of 10. MATLAB uses the Intel MKL library internally and also supports GPU computing, so one might as well use it for the price if you are a student ($50 for MATLAB + $10 for the Parallel Computing Toolbox). If you get the free trial of Intel Parallel Studio, it comes with the MKL library, as well as the C++ and FORTRAN compilers, which will come in handy if you want to build BLAS and LAPACK from MKL or ATLAS on Windows:

Parallel Studio also comes with the Intel MPI library, useful for cluster computing applications and their newest Xeon processors. Although the process of building BLAS and LAPACK with MKL optimization is not trivial, its benefits for Python and R are quite large, as described in this Intel webinar:

Anaconda and Enthought have built businesses out of making this functionality, and several other things, easier to deploy. However, it is freely available to anyone willing to do a bit of work (and a bit of learning).

For those using R, you can now get MKL-optimized BLAS and LAPACK for free with R Open from Revolution Analytics.

EDIT: Anaconda Python now ships with MKL optimization, as well as support for a number of other Intel library optimizations via the Intel Python distribution. However, GPU support for Anaconda in the Accelerate library (formerly known as NumbaPro) still costs over $10,000 USD! The best alternatives for that are probably PyCUDA and scikit-cuda, since Copperhead (essentially a free version of Anaconda Accelerate) sadly ceased development five years ago. It can be found here if anyone wants to pick up where they left off.
