Note: All posts are based on a practical approach, avoiding lengthy theory. All have been tested on development servers. Please don't try any post on production servers until you are sure.

Thursday, January 25, 2018

Working with Apache Avro to manage Big Data Files

What is Avro?

Apache Avro is a language-neutral data serialization system and a preferred tool for serializing data in Hadoop. Serialization is the process of translating data structures or object state into binary or textual form, to transport the data over a network or to store it on persistent storage. Once the data has been transported over the network or retrieved from persistent storage, it needs to be deserialized again.

Avro is not only language independent but also schema-based. Avro serializes data into a compact binary format, which can be deserialized by any application.

Avro uses the JSON format to declare its data structures. At present, it supports languages such as Java, C, C++, C#, Python, and Ruby. It serializes quickly, and the resulting serialized data is smaller in size (as well as compressible and splittable). The schema is stored along with the Avro data in the file for any further processing. In RPC, the client and the server exchange schemas during the connection.
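To make the serialize/deserialize round trip concrete before we get to Avro itself, here is a minimal sketch using only Python's standard json module (plain JSON, not Avro's binary format; the record contents are made up for illustration):

```python
import json

# An in-memory record, similar to the "emp" examples later in this post
record = {"name": "SMITH", "age": 30}

# Serialization: translate the object into bytes for network transport or storage
payload = json.dumps(record).encode("utf-8")

# Deserialization: reconstruct the original object from the bytes
restored = json.loads(payload.decode("utf-8"))
assert restored == record
```

Avro performs the same round trip, but with a compact binary encoding driven by a schema instead of self-describing JSON text.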


Download Avro from the link below.

You can select and download the library for any of the languages provided. In this post we use Java; hence download the jar files avro-1.7.7.jar and avro-tools-1.7.7.jar.

Setting Classpath

To work with Avro in a Linux environment, download the jar files mentioned above and place them in your desired location.


After copying these files into a folder, set the classpath to that folder in the ~/.bashrc or ~/.bash_profile file.

[hdpclient@en01 ~]$ echo $CLASSPATH

#class path for Avro
export CLASSPATH=$CLASSPATH:/usr/hadoopsw/avro/*

Creating Avro Schemas

Avro, being a schema-based serialization utility, accepts schemas as input. Although various schema formats are available, Avro follows its own standard for defining schemas. These schemas describe the following details −
  • type of file (record by default)
  • location of record
  • name of the record
  • fields in the record with their corresponding data types

Using these schemas, you can store serialized values in a compact binary format. These values are stored without any per-record metadata.

The Avro schema is created in JavaScript Object Notation (JSON) format, a lightweight text-based data interchange format. It can be created in one of the following ways:
  • A JSON string
  • A JSON object
  • A JSON array

The given schema defines a record within the "myns" namespace. The name of the document is "emp", and it contains two fields:

{
   "type" : "record",
   "namespace" : "myns",
   "name" : "emp",
   "fields" : [
      { "name" : "name", "type" : "string" },
      { "name" : "age", "type" : "int" }
   ]
}
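Because an Avro schema is plain JSON, you can inspect it with any JSON parser. A quick sketch using Python's standard json module (the schema text mirrors the example above):

```python
import json

# The "emp" schema from above, as a JSON string
schema_text = """
{
   "type": "record",
   "namespace": "myns",
   "name": "emp",
   "fields": [
      {"name": "name", "type": "string"},
      {"name": "age", "type": "int"}
   ]
}
"""

schema = json.loads(schema_text)
field_names = [f["name"] for f in schema["fields"]]
print(schema["type"], schema["name"], field_names)
```

This is exactly what Avro's own schema parser does before generating code or serializing data.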

We can observe that this schema contains four attributes, briefly described below −

type − Describes the document type, in this case a "record".
namespace − Describes the name of the namespace in which the object resides.
name − Describes the schema name.
fields − An attribute array, in which each field definition contains −
name − Describes the name of the field.
type − Describes the data type of the field.

Primitive Data Types of Avro
An Avro schema supports primitive data types as well as complex data types.

null − A type having no value.
boolean − A binary value.
int − 32-bit signed integer.
long − 64-bit signed integer.
float − Single-precision (32-bit) IEEE 754 floating-point number.
double − Double-precision (64-bit) IEEE 754 floating-point number.
bytes − Sequence of 8-bit unsigned bytes.
string − Unicode character sequence.
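The compactness mentioned earlier comes from how these primitives are encoded: int and long values use zigzag-encoded variable-length integers, and a string is its byte length followed by UTF-8 data, with no field names or tags in the stream. A hand-rolled sketch of that encoding, for illustration only (the Avro library does this for you, and this sketch assumes values within the 64-bit range):

```python
def zigzag(n: int) -> int:
    # Interleave signed values so small magnitudes get small codes: 0,-1,1,-2 -> 0,1,2,3
    return (n << 1) ^ (n >> 63)

def encode_long(n: int) -> bytes:
    # Varint: 7 data bits per byte, high bit set while more bytes follow
    z = zigzag(n)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_string(s: str) -> bytes:
    # A string is its byte length (as a varint) followed by the UTF-8 bytes
    data = s.encode("utf-8")
    return encode_long(len(data)) + data

print(encode_long(30))         # a small int fits in a single byte
print(encode_string("SMITH"))  # one length byte plus five UTF-8 bytes
```

So a record like {"ename": "SMITH", "sal": 800} costs only a handful of bytes on disk; the field names live once in the schema, not in every record.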

Complex Data Types of Avro
Along with primitive data types, Avro provides six complex data types namely Records, Enums, Arrays, Maps, Unions, and Fixed.

The enum data type represents an enumeration, a list of items in a collection:

{
   "type" : "enum",
   "name" : "Numbers",
   "namespace" : "data",
   "symbols" : [ "ONE", "TWO", "THREE", "FOUR" ]
}

name − The value of this field holds the name of the enumeration.
namespace − The value of this field contains the string that qualifies the name of the Enumeration.
symbols − The value of this field holds the enum's symbols as an array of names.

The array data type defines an array field with a single attribute, items, which holds the type of the array's elements.

{ "type" : "array", "items" : "int" }

The map data type is an associative array of key-value pairs.

{"type" : "map", "values" : "int"}

The values attribute holds the data type of the map's content. Avro map keys are always assumed to be strings.
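Since the keys are always strings, only the value type is declared in the schema. A small sketch of a value conforming to the map schema above (plain Python, with hypothetical data, just to show the shape):

```python
import json

schema = json.loads('{"type": "map", "values": "int"}')

# A conforming map value: string keys, values of the declared "int" type
pay_components = {"sal": 800, "comm": 0}

assert schema["type"] == "map"
assert all(isinstance(k, str) for k in pay_components)
assert all(isinstance(v, int) for v in pay_components.values())
```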


A union data type is used whenever a field can hold one of several data types. Unions are represented as JSON arrays. For example, if a field could be either an int or null, the union is represented as ["int", "null"].

{
   "type" : "record",
   "namespace" : "tutorialspoint",
   "name" : "empdetails",
   "fields" : [
      { "name" : "experience", "type": ["int", "null"] },
      { "name" : "age", "type": "int" }
   ]
}
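On the wire, a union value is encoded as the zero-based index of the chosen branch (as a zigzag varint), followed by the value encoded per that branch; a null in ["int", "null"] therefore costs a single byte. A hand-rolled sketch of that rule (for illustration; the Avro library handles this internally):

```python
def zz_varint(n: int) -> bytes:
    # Zigzag + variable-length integer, as defined by the Avro spec
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while z > 0x7F:
        out.append((z & 0x7F) | 0x80)
        z >>= 7
    out.append(z)
    return bytes(out)

def encode_union(branches, branch_type, encoded_value: bytes) -> bytes:
    # Branch index first, then the value encoded per the chosen branch
    return zz_varint(branches.index(branch_type)) + encoded_value

branches = ["int", "null"]
null_experience = encode_union(branches, "null", b"")         # null carries no value bytes
int_experience = encode_union(branches, "int", zz_varint(5))  # int 5 follows the index
```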


The fixed data type is used to declare a fixed-size field that can be used for storing binary data.

{ "type" : "fixed" , "name" : "bdata", "size" : 1048576}

Here name holds the name of the field, and size holds the size of the field in bytes.

Working Example

With some understanding of Avro and its schemas/data types, we can move on to a working example. I have a Presto data store with a table "emp". I want to query this table and store the result in an Avro data file. I need to write small Java programs to write and read Avro files. I've provided the code, which can easily be modified for any specific requirements.

Defining a Schema
Create an Avro schema as shown below, matching your query, and save it as emp.avsc. I want to query only four columns from the "emp" table residing in the Presto data store.

{
   "namespace": "myns",
   "type": "record",
   "name": "emp",
   "fields": [
      {"name": "empno", "type": "int"},
      {"name": "ename", "type": "string"},
      {"name": "sal", "type": "int"},
      {"name": "comm", "type": "int"}
   ]
}

Compiling the Schema
After creating the Avro schema, we need to compile it using Avro tools, which are located in the avro-tools-1.7.7.jar file. We need to provide the avro-tools-1.7.7.jar file path at compilation.

java -jar <path/to/avro-tools-1.7.7.jar> compile schema <path/to/schema-file> <destination-folder>

java -jar /usr/hadoopsw/avro/avro-tools-1.7.7.jar compile schema /usr/hadoopsw/avro/schema/emp.avsc /usr/hadoopsw/avro/gen_code

[hdpclient@en01 ~]$ java -jar /usr/hadoopsw/avro/avro-tools-1.7.7.jar compile schema /usr/hadoopsw/avro/schema/emp.avsc /usr/hadoopsw/avro/gen_code
Input files to compile:

After this compilation, a package is created in the destination directory with the name given as the namespace in the schema file. Within this package, a Java source file named after the schema is generated. The generated file contains Java code corresponding to the schema; it can be accessed directly by an application and is used to create data according to the schema.

The generated class contains:

  • A default constructor, and a parameterized constructor which accepts all the variables of the schema.
  • The setter and getter methods for all variables in the schema.
  • A getSchema() method which returns the schema.
  • Builder methods.

Creating and Serializing the Data
First of all, copy the generated Java file (with its package folder, e.g. myns) produced by compiling the schema into the current directory (the application directory where you will write your Java program to create and read Avro files), or import it from where it is located.

Now we can write a new Java file and instantiate the class in the generated file (emp) to add employee data to the schema.

Below are the steps to create new java file.

Step 1: Instantiate the generated emp class.

Step 2: Use setter methods to insert data

Step 3: Create an object of DatumWriter interface using the SpecificDatumWriter class. This converts Java objects into in-memory serialized format.

Step 4: Instantiate DataFileWriter for the emp (generated) class. This class writes a sequence of serialized records of data conforming to a schema, along with the schema itself, in a file. It requires the DatumWriter object as a parameter to the constructor.

Step 5: Open a new file to store the data matching to the given schema using create() method. This method requires the schema, and the path of the file where the data is to be stored, as parameters.

Step 6: Add all the created records to the file using the append() method, then close the writer.

The following complete program shows how to serialize data into a file using Apache Avro. It reads data from a database (i.e. Presto in our case) and serializes it.

import java.io.File;
import java.sql.*;

import org.apache.avro.file.DataFileWriter;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.specific.SpecificDatumWriter;

import myns.*;

public class CreateAvroFile {

    public static void main(String[] args) {
        System.out.println("Simple Avro File Creation Utility");

        String query = "select empno,ename,sal,comm from hive.scott.emp";

        // JDBC driver name and database URL
        String JDBC_DRIVER = "com.teradata.presto.jdbc4.Driver";
        String CONNECTION_URL = "jdbc:presto://en01:6060;User=presto;";

        try {
            // Register JDBC driver
            Class.forName(JDBC_DRIVER);

            // Open a connection
            Connection connection = DriverManager.getConnection(CONNECTION_URL);
            System.out.println("Connection Established...");

            // Execute the query
            Statement stmt = connection.createStatement();
            ResultSet rs = stmt.executeQuery(query);

            // Instantiate necessary objects for serialization
            emp e = new emp(); // Step 1

            // Instantiate the DatumWriter; it converts Java objects into an in-memory serialized format
            DatumWriter<emp> empDatumWriter = new SpecificDatumWriter<emp>(emp.class); // Step 3
            DataFileWriter<emp> empFileWriter = new DataFileWriter<emp>(empDatumWriter); // Step 4
            empFileWriter.create(e.getSchema(), new File("/usr/hadoopsw/avro/emp.avro")); // Step 5

            // Extract data from the result set and use it for serialization
            while (rs.next()) {
                // Create values according to the schema, using the setters
                // of the class generated by Avro compilation - Step 2
                e.setEmpno(rs.getInt("empno"));
                e.setEname(rs.getString("ename"));
                e.setSal(rs.getInt("sal"));
                e.setComm(rs.getInt("comm"));

                // Display values
                System.out.println(e);

                // Serialize the record into the file - Step 6
                empFileWriter.append(e);
            } // while ends

            // Clean-up environment
            empFileWriter.close();
            rs.close();
            stmt.close();
            connection.close();
            System.out.println("Above data successfully serialized in emp.avro");
        } catch (Exception ex) {
            System.out.println(ex.toString());
        }
    }
}



Compile and run the utility to test

[hdpclient@en01 avro]$ javac CreateAvroFile.java
[hdpclient@en01 avro]$ java CreateAvroFile
Simple Avro File Creation Utility

Connection Established...

{"empno": 7369, "ename": "SMITH", "sal": 800, "comm": 0}
{"empno": 7499, "ename": "ALLEN", "sal": 1600, "comm": 300}
{"empno": 7521, "ename": "WARD", "sal": 1250, "comm": 500}
{"empno": 7566, "ename": "JONES", "sal": 2975, "comm": 0}
{"empno": 7654, "ename": "MARTIN", "sal": 1250, "comm": 1400}
{"empno": 7698, "ename": "BLAKE", "sal": 2850, "comm": 0}
{"empno": 7782, "ename": "CLARK", "sal": 2450, "comm": 0}
{"empno": 7788, "ename": "SCOTT", "sal": 3000, "comm": 0}
{"empno": 7839, "ename": "KING", "sal": 5000, "comm": 0}
{"empno": 7844, "ename": "TURNER", "sal": 1500, "comm": 0}
{"empno": 7876, "ename": "ADAMS", "sal": 1100, "comm": 0}
{"empno": 7900, "ename": "JAMES", "sal": 950, "comm": 0}
{"empno": 7902, "ename": "FORD", "sal": 3000, "comm": 0}
{"empno": 7934, "ename": "MILLER", "sal": 1300, "comm": 0}

Above data successfully serialized in emp.avro

Deserialization by Generating a Class
One can read an Avro schema into a program either by generating a class corresponding to the schema or by using the parsers library. 

For the purpose of this post, I read the schema by generating a class and deserialize the data using Avro. The procedure is as follows.

Step 1: Create an object of DatumReader interface using SpecificDatumReader class.

Step 2: Instantiate the DataFileReader class. This class reads serialized data from a file. It requires the DatumReader object and the path of the file (emp.avro) where the serialized data exists as parameters to the constructor.

Step 3: Print the deserialized data, using the methods of DataFileReader.

The following complete program shows how to deserialize the data in a file using Avro.


import java.io.File;
import java.io.IOException;

import org.apache.avro.file.DataFileReader;
import org.apache.avro.io.DatumReader;
import org.apache.avro.specific.SpecificDatumReader;

import myns.*;

public class ReadAvroFile {
   public static void main(String args[]) throws IOException {
      //DeSerializing the objects
      DatumReader<emp> empDatumReader = new SpecificDatumReader<emp>(emp.class); //Step 1
      //Instantiating DataFileReader
      DataFileReader<emp> dataFileReader = new DataFileReader<emp>(new
         File("/usr/hadoopsw/avro/emp.avro"), empDatumReader); //Step 2
      emp em = null;
      while (dataFileReader.hasNext()) { //Step 3
         em = dataFileReader.next(em);
         System.out.println(em);
      }
      dataFileReader.close();
   }
}

Compile and test deserialization

[hdpclient@en01 avro]$ javac ReadAvroFile.java
[hdpclient@en01 avro]$ java ReadAvroFile
{"empno": 7369, "ename": "SMITH", "sal": 800, "comm": 0}
{"empno": 7499, "ename": "ALLEN", "sal": 1600, "comm": 300}
{"empno": 7521, "ename": "WARD", "sal": 1250, "comm": 500}
{"empno": 7566, "ename": "JONES", "sal": 2975, "comm": 0}
{"empno": 7654, "ename": "MARTIN", "sal": 1250, "comm": 1400}
{"empno": 7698, "ename": "BLAKE", "sal": 2850, "comm": 0}
{"empno": 7782, "ename": "CLARK", "sal": 2450, "comm": 0}
{"empno": 7788, "ename": "SCOTT", "sal": 3000, "comm": 0}
{"empno": 7839, "ename": "KING", "sal": 5000, "comm": 0}
{"empno": 7844, "ename": "TURNER", "sal": 1500, "comm": 0}
{"empno": 7876, "ename": "ADAMS", "sal": 1100, "comm": 0}
{"empno": 7900, "ename": "JAMES", "sal": 950, "comm": 0}
{"empno": 7902, "ename": "FORD", "sal": 3000, "comm": 0}
{"empno": 7934, "ename": "MILLER", "sal": 1300, "comm": 0}

Using Avro Tools

Avro provides a set of tools for working with Avro data files and schemas. Below are some examples.

Running the jar without any command-line parameters shows the help:

[hdpclient@en01 avro]$ java -jar avro-tools-1.7.7.jar
Version 1.7.7 of Apache Avro
Copyright 2010 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (

C JSON parsing provided by Jansson and
written by Petri Lehtinen. The original software is
available from
Available tools:
          cat  extracts samples from files
      compile  Generates Java code for the given schema.
       concat  Concatenates avro files without re-compressing.
   fragtojson  Renders a binary-encoded Avro datum as JSON.
     fromjson  Reads JSON records and writes an Avro data file.
     fromtext  Imports a text file into an avro data file.
      getmeta  Prints out the metadata of an Avro data file.
    getschema  Prints out schema of an Avro data file.
          idl  Generates a JSON schema from an Avro IDL file
 idl2schemata  Extract JSON schemata of the types from an Avro IDL file
       induce  Induce schema/protocol from Java class/interface via reflection.
   jsontofrag  Renders a JSON-encoded Avro datum as binary.
       random  Creates a file with randomly generated instances of a schema.
      recodec  Alters the codec of a data file.
  rpcprotocol  Output the protocol of a RPC service
   rpcreceive  Opens an RPC Server and listens for one message.
      rpcsend  Sends a single RPC message.
       tether  Run a tethered mapreduce job.
       tojson  Dumps an Avro data file as JSON, record per line or pretty.
       totext  Converts an Avro data file to a text file.
     totrevni  Converts an Avro data file to a Trevni file.
  trevni_meta  Dumps a Trevni file's metadata as JSON.
trevni_random  Create a Trevni file filled with random instances of a schema.
trevni_tojson  Dumps a Trevni file as JSON.

1. Dump the file’s header key-value metadata

[hdpclient@en01 avro]$ java -jar avro-tools-1.7.7.jar getmeta emp.avro

avro.schema     {"type":"record","name":"emp","namespace":"myns","fields":[{"name":"empno","type":"int"},{"name":"ename","type":"string"},{"name":"sal","type":"int"},{"name":"comm","type":"int"}]}

2. Dump the file’s schema

[hdpclient@en01 avro]$ java -jar avro-tools-1.7.7.jar getschema emp.avro

{
  "type" : "record",
  "name" : "emp",
  "namespace" : "myns",
  "fields" : [ {
    "name" : "empno",
    "type" : "int"
  }, {
    "name" : "ename",
    "type" : "string"
  }, {
    "name" : "sal",
    "type" : "int"
  }, {
    "name" : "comm",
    "type" : "int"
  } ]
}

3. Dump the content of an Avro data file as JSON

[hdpclient@en01 avro]$ java -jar avro-tools-1.7.7.jar tojson emp.avro | tail


4. Merge Avro files

java -jar avro-tools-1.7.7.jar concat /input/part* /output/bigfile.avro

[hdpclient@te1-hdp-rp-en01 avro]$ java -jar avro-tools-1.7.7.jar concat /data/hdfsloc/tmp/avroTestData/000000* /data/hdfsloc/tmp/avroTestData/empBigAvroFile
