This work explores the feasibility of specialized hardware implementing the Cortical Learning Algorithm (CLA) in order to fully exploit its inherent advantages. The CLA, which is inspired by the current understanding of the mammalian neocortex, is the basis of Hierarchical Temporal Memory (HTM). In contrast to other machine learning (ML) approaches, its structure is not application dependent, and it relies on fully unsupervised, continuous learning. We hypothesize that a hardware implementation will not only extend the existing practical uses of these ideas to broader scenarios but also exploit the CLA's hardware-friendly characteristics. The proposed architecture will enable the system size to be scaled up relative to a state-of-the-art CLA software implementation. It may be possible to improve performance by four orders of magnitude and energy efficiency by up to eight orders of magnitude.