A sorted, distributed key-value store with cell-based access control
Accumulo is a low-latency, large-table data storage and retrieval system with cell-level security. Accumulo is based on Google's Bigtable and runs on YARN, the data operating system of Hadoop. YARN gives visualization and analysis applications predictable access to the data stored in Accumulo.
Accumulo was originally developed at the National Security Agency before being contributed to the Apache Software Foundation as an open-source incubation project. Reflecting its origins in the intelligence community, Accumulo provides extremely fast access to data in massive tables while also controlling access to its billions of rows and millions of columns down to the individual cell. This is known as fine-grained data access control.
Cell-level access control is important for organizations with complex policies governing who is allowed to see which data. It enables data sets with different access-control policies to be intermingled in a single table, even when some elements are sensitive. Those with permission to see sensitive data can work alongside co-workers without those privileges, and each user accesses data in accordance with their own permissions.
Without Accumulo, such policies are difficult to enforce systematically; Accumulo encodes the rules for each individual data cell and enforces fine-grained access at read time.
Here is a list of some of Apache Accumulo’s most important features:
- Table design and configuration
- Integrity and availability
Accumulo stores sorted key-value pairs. Sorting data by key allows rapid lookups of individual keys or scans over a range of keys. Since data is retrieved by key, the keys should contain the information that will be used to do the lookup.
The values may contain anything since they are not used for retrieval.
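The effect of key sorting can be illustrated with a plain Java `TreeMap`, which, like Accumulo, keeps entries ordered by key and supports efficient point lookups and range scans. This is a sketch of the concept only, not the Accumulo API; the table name, keys, and values are invented:

```java
import java.util.Map;
import java.util.TreeMap;

public class RangeScanSketch {
    public static void main(String[] args) {
        // A sorted key-value store: keys carry the information used
        // for lookup, while values can hold anything.
        TreeMap<String, String> table = new TreeMap<>();
        table.put("user_alice", "{dept: engineering}");
        table.put("user_bob", "{dept: sales}");
        table.put("zone_east", "{servers: 12}");

        // Point lookup by exact key.
        System.out.println(table.get("user_bob")); // {dept: sales}

        // Range scan: every key with the prefix "user_". Because the
        // map is sorted, this touches only the matching contiguous run.
        for (Map.Entry<String, String> e :
                table.subMap("user_", "user_\uffff").entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```

The same idea is why Accumulo keys should encode whatever you plan to look up by: a scan over a key range is cheap, but finding a value without its key requires reading the whole table.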
The original Bigtable design has a row and column paradigm. Accumulo extends the column with an additional "visibility" label that provides the fine-grained access control.
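A minimal sketch of how a visibility label restricts reads is shown below. The `Cell` record and `read` method are invented for illustration; real Accumulo visibility labels are boolean expressions (for example `hr&finance` or `admin|audit`) evaluated against a user's authorizations, whereas this sketch handles only a single conjunction of labels:

```java
import java.util.Set;

public class VisibilitySketch {
    // A cell value guarded by labels that must ALL be present in the
    // reader's authorizations (conjunction-only simplification).
    record Cell(String value, Set<String> visibility) {}

    // Returns the value if the user is authorized, otherwise null
    // (i.e., the cell is silently filtered out of the scan).
    static String read(Cell cell, Set<String> userAuths) {
        return userAuths.containsAll(cell.visibility()) ? cell.value() : null;
    }

    public static void main(String[] args) {
        Cell salary = new Cell("120000", Set.of("hr", "finance"));
        System.out.println(read(salary, Set.of("hr", "finance", "it"))); // 120000
        System.out.println(read(salary, Set.of("hr")));                  // null
    }
}
```

Two users can scan the same table and see different results: the cell simply does not appear for a reader who lacks the required labels.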
Accumulo is written in Java, but a Thrift proxy lets users interact with it from C++, Python, or Ruby. A pluggable user-authentication system allows LDAP connections to Accumulo. An HDFS class loader distributes JARs from the Hadoop Distributed File System (HDFS) to multiple servers. Accumulo also has connectors to other Apache projects such as Hive and Pig.
Try out the tutorial "Analyzing Graph Data with Sqrrl and HDP".
The Apache Accumulo community continues to work on further improvements.
Apache, Hadoop, Falcon, Atlas, Tez, Sqoop, Flume, Kafka, Pig, Hive, HBase, Accumulo, Storm, Solr, Spark, Ranger, Knox, Ambari, ZooKeeper, Oozie, Phoenix, NiFi, NiFi Registry, HAWQ, Zeppelin, Slider, Mahout, MapReduce, HDFS, YARN, Metron and the Hadoop elephant and Apache project logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States or other countries.