A non-relational (NoSQL) database that runs on top of HDFS
Apache HBase is an open source NoSQL database that provides real-time read/write access to large datasets.
HBase scales linearly to handle huge data sets with billions of rows and millions of columns, and it easily combines data sources that use a wide variety of different structures and schemas. HBase is natively integrated with Hadoop and works seamlessly alongside other data access engines through YARN.
Apache HBase provides random, real-time access to your data in Hadoop. It was created for hosting very large tables, making it a great choice for storing multi-structured or sparse data. Users can query HBase as of a particular point in time, making “flashback” queries possible. These characteristics make HBase a great choice for storing semi-structured data, such as log data, and then serving that data very quickly to users or applications integrated with HBase.
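HBase makes such “flashback” queries possible by keeping multiple timestamped versions of each cell. The idea can be illustrated with a plain-Python sketch (not the HBase API; all names here are hypothetical): a versioned store answers an “as of” query by returning the newest value written at or before the requested timestamp.

```python
import bisect

class VersionedCell:
    """Toy model of an HBase cell: multiple timestamped versions per key."""

    def __init__(self):
        self._timestamps = []  # kept sorted ascending
        self._values = []

    def put(self, timestamp, value):
        # Insert while keeping timestamps sorted; HBase likewise orders
        # cell versions by timestamp.
        i = bisect.bisect_left(self._timestamps, timestamp)
        self._timestamps.insert(i, timestamp)
        self._values.insert(i, value)

    def get_as_of(self, timestamp):
        # "Flashback" read: newest version at or before the given timestamp.
        i = bisect.bisect_right(self._timestamps, timestamp)
        return self._values[i - 1] if i else None

cell = VersionedCell()
cell.put(100, "v1")
cell.put(200, "v2")
print(cell.get_as_of(150))  # "v1" -- the value as it stood at time 150
```

In the real HBase client the same effect is achieved by restricting the time range on a read, but the lookup logic above captures the essential behavior.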
Enterprises use Apache HBase’s low-latency storage for scenarios that require real-time analysis and tabular data for end-user applications. One company that provides web security services maintains a system accepting billions of event traces and activity logs from its customers’ desktops every day. The company’s programmers can tightly integrate their security solutions with HBase to ensure that the protection they provide keeps pace with real-time changes in the threat landscape.
Another company provides stock market ticker plant data that its users query more than thirty thousand times per second, with an SLA of only a few milliseconds. Apache HBase provides that super low-latency access over an enormous, rapidly changing data store.
HBase scales linearly by requiring all tables to have a primary key. The key space is divided into sequential blocks that are then allotted to a region. RegionServers own one or more regions, so the load is spread uniformly across the cluster. If the keys within a region are frequently accessed, HBase can further subdivide the region by splitting it automatically, so that manual data sharding is not necessary.
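The partitioning scheme described above can be sketched in a few lines of plain Python (this is an illustration of the concept, not HBase code; the region names and split helper are hypothetical). Each region owns the key range from its start key up to the next region’s start key, and a hot region can be split at a chosen key without any manual sharding:

```python
import bisect

# Sorted region start keys: each region owns [start_key, next_start_key).
# "" serves as the open lower bound of the first region.
region_starts = ["", "g", "n", "t"]
region_names = ["region-1", "region-2", "region-3", "region-4"]

def region_for(row_key):
    # The owning region is the one with the greatest start key <= row_key.
    i = bisect.bisect_right(region_starts, row_key) - 1
    return region_names[i]

def split_region(name, split_key):
    # Auto-split: divide a frequently accessed region at split_key,
    # so the load spreads without manual data sharding.
    i = region_names.index(name)
    region_starts.insert(i + 1, split_key)
    region_names.insert(i + 1, name + "-b")

print(region_for("apple"))   # region-1
split_region("region-1", "d")
print(region_for("apple"))   # still region-1: "apple" sorts before "d"
print(region_for("egg"))     # region-1-b, the new half of the split
```

In HBase the split point is chosen automatically based on region size and access patterns, but the effect is the same: the key space stays sorted and each row key maps to exactly one region.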
ZooKeeper and HMaster servers make information about the cluster topology available to clients. Clients connect to these and download a list of RegionServers, the regions contained within those RegionServers and the key ranges hosted by the regions. Clients know exactly where any piece of data is in HBase and can contact the RegionServer directly without any need for a central coordinator.
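Client-side routing can be sketched the same way (again a plain-Python illustration with assumed names, not the HBase client API): the client downloads the region-to-server mapping once, caches it, and then resolves every row key locally before contacting the owning RegionServer directly.

```python
import bisect

# Cached region metadata, downloaded from ZooKeeper/HMaster at connection
# time: (start_key, region_server) pairs sorted by start key.
region_map = [
    ("",  "rs1.example.com:16020"),
    ("m", "rs2.example.com:16020"),
]
start_keys = [start for start, _ in region_map]

def server_for(row_key):
    # Routing is a local binary search over the cached metadata -- no
    # central coordinator sits on the read/write path.
    i = bisect.bisect_right(start_keys, row_key) - 1
    return region_map[i][1]

print(server_for("apple"))  # rs1.example.com:16020
print(server_for("zebra"))  # rs2.example.com:16020
```

The cached map is refreshed only when a region moves or splits, so the common case adds no coordination overhead at all.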
RegionServers buffer recent writes in an in-memory memstore and cache frequently read data in a block cache. Optionally, users can move the cache off-heap, caching gigabytes of data while minimizing pauses for garbage collection.
Apache HBase provides high availability in several ways, including data replication through HDFS and automatic RegionServer failover coordinated by ZooKeeper.
As HBase evolves, the community is working on continued improvements to its performance, integration options and developer accessibility.
|Area|Improvement|
|---|---|
|Performance|Taking advantage of emerging technologies like HDFS heterogeneous storage and more effective use of RAM|
|Integration|Support for streaming technologies including Apache Storm and Spark Streaming|
|Developer access|Access from a variety of development environments, including Java, .NET, and Python|
Recent innovation in Apache HBase spans several releases: HDP 2.6, HDP 2.5, and HDP 2.2 (which shipped Apache HBase trunk).
Introduction R is a popular tool for statistics and data analysis. It has rich visualization capabilities and a large collection of libraries that have been developed and maintained by the R developer community. One drawback to R is that it’s designed to operate on in-memory data, which makes it unsuitable for large datasets. Spark is […]
A very common customer request is to index text within image files, for example the text in scanned PNG files. This tutorial explains step by step how to do that with SOLR. Prerequisites: download the Hortonworks Sandbox and complete the “Learning the Ropes of the HDP Sandbox” tutorial. Step-by-step guide […]
Introduction In this tutorial, you will learn about the different features available in the HDF sandbox. HDF stands for Hortonworks DataFlow. HDF was built to make processing data-in-motion easier while also directing the data from source to destination. You will learn about quick links for accessing these tools so that when you […]
Introduction: JReport is an embeddable BI reporting tool that can easily extract and visualize data from Hortonworks Data Platform 2.3 using the Apache Hive JDBC driver. You can create reports, dashboards, and data analyses, which can later be embedded into your own applications. In this tutorial, we will walk through the following steps [...]
Apache Zeppelin on HDP 2.4.2 Author: Vinay Shukla In March 2016 we delivered the second technical preview of Apache Zeppelin, on HDP 2.4. Meanwhile we and the Zeppelin community have continued to add new features to Zeppelin. These features are now available in the final technical preview of Apache Zeppelin. This technical preview works with […]
Introduction Hadoop has always been associated with BigData, yet the perception is that it’s only suitable for high-latency, high-throughput queries. With the contribution of the community, you can use Hadoop interactively for data exploration and visualization. In this tutorial you’ll learn how to analyze large datasets using Apache Hive LLAP on Amazon Web Services […]
Introduction The Azure cloud infrastructure has become a common place for users to deploy virtual machines on the cloud due to its flexibility, ease of deployment, and cost benefits. Microsoft has expanded Azure to include a marketplace with thousands of certified, open source, and community software applications and developer services, pre-configured for Microsoft Azure. This […]
The Hortonworks Sandbox is delivered as a Dockerized container with the most common ports already opened and forwarded for you. If you would like to open even more ports, check out this tutorial.
Apache, Hadoop, Falcon, Atlas, Tez, Sqoop, Flume, Kafka, Pig, Hive, HBase, Accumulo, Storm, Solr, Spark, Ranger, Knox, Ambari, ZooKeeper, Oozie, Phoenix, NiFi, HAWQ, Zeppelin, Slider, Mahout, MapReduce, HDFS, YARN, Metron and the Hadoop elephant and Apache project logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States or other countries.