Version: 0.5.1

Apache Hive catalog

Introduction

Gravitino offers the capability to utilize Apache Hive as a catalog for metadata management.

Requirements and limitations

  • The Hive catalog requires a Hive Metastore Service (HMS), or a compatible implementation of the HMS, such as AWS Glue.
  • Gravitino must have network access to the Hive metastore service using the Thrift protocol.
note

The Hive catalog is available for Apache Hive 2.x only. Support for Apache Hive 3.x is under development.

Catalog

Catalog capabilities

The Hive catalog supports creating, updating, and deleting databases and tables in the HMS.

Catalog properties

| Property Name | Description | Default Value | Required | Since Version |
|---|---|---|---|---|
| `metastore.uris` | The Hive metastore service URIs; separate multiple addresses with commas, for example `thrift://127.0.0.1:9083`. | (none) | Yes | 0.2.0 |
| `client.pool-size` | The maximum number of Hive metastore clients in the pool for Gravitino. | 1 | No | 0.2.0 |
| `gravitino.bypass.` | Property names with this prefix are passed down to the underlying HMS client. For example, `gravitino.bypass.hive.metastore.failure.retries = 3` indicates 3 retries upon failure of Thrift metastore calls. | (none) | No | 0.2.0 |
| `client.pool-cache.eviction-interval-ms` | The cache pool eviction interval. | 300000 | No | 0.4.0 |
| `impersonation-enable` | Enable user impersonation for the Hive catalog. | false | No | 0.4.0 |
| `kerberos.principal` | The Kerberos principal for the catalog. You should configure `gravitino.bypass.hadoop.security.authentication`, `gravitino.bypass.hive.metastore.kerberos.principal`, and `gravitino.bypass.hive.metastore.sasl.enabled` if you want to use Kerberos. | (none) | Required if you use Kerberos | 0.4.0 |
| `kerberos.keytab-uri` | The URI of the keytab for the catalog. Supported protocols are `https`, `http`, `ftp`, and `file`. | (none) | Required if you use Kerberos | 0.4.0 |
| `kerberos.check-interval-sec` | The interval, in seconds, at which to check the validity of the principal. | 60 | No | 0.4.0 |
| `kerberos.keytab-fetch-timeout-sec` | The timeout, in seconds, for fetching the keytab. | 60 | No | 0.4.0 |

When you use Gravitino with Trino, you can pass Trino Hive connector configuration using the prefix `trino.bypass.`. For example, use `trino.bypass.hive.config.resources` to pass `hive.config.resources` to the Gravitino Hive catalog in the Trino runtime.

When you use Gravitino with Spark, you can pass Spark Hive connector configuration using the prefix `spark.bypass.`. For example, use `spark.bypass.hive.exec.dynamic.partition.mode` to pass `hive.exec.dynamic.partition.mode` to the Spark Hive connector in the Spark runtime.
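
As a sketch, the catalog properties above might be assembled into a create-catalog request body like the following. The metalake/catalog names, the metastore address, and the exact JSON layout are illustrative assumptions; consult the Gravitino REST API reference for the authoritative shape.

```python
import json

# Hypothetical request body for creating a Hive catalog in Gravitino.
# Names and addresses below are placeholders, not values from this document.
create_catalog_body = {
    "name": "hive_catalog",
    "type": "RELATIONAL",
    "provider": "hive",
    "comment": "Hive catalog backed by an HMS",
    "properties": {
        # Required: Thrift URI(s) of the Hive Metastore Service.
        "metastore.uris": "thrift://127.0.0.1:9083",
        # Optional: size of the HMS client pool (default 1).
        "client.pool-size": "4",
        # Passed through to the underlying HMS client via the bypass prefix.
        "gravitino.bypass.hive.metastore.failure.retries": "3",
        # Passed through to the Trino Hive connector when used from Trino.
        "trino.bypass.hive.config.resources": "/etc/hive/conf/hive-site.xml",
    },
}

print(json.dumps(create_catalog_body, indent=2))
```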

Catalog operations

Refer to Manage Relational Metadata Using Gravitino for more details.

Schema

Schema capabilities

The Hive catalog supports creating, updating, and deleting databases in the HMS.

Schema properties

Schema properties supply or set metadata for the underlying Hive database. The following table lists predefined schema properties for the Hive database. Additionally, you can define your own key-value pair properties and transmit them to the underlying Hive database.

| Property name | Description | Default value | Required | Since Version |
|---|---|---|---|---|
| `location` | The directory for Hive database storage, such as `/user/hive/warehouse`. | HMS uses the value of `hive.metastore.warehouse.dir` in `hive-site.xml` by default. | No | 0.1.0 |
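As a sketch, a create-schema request body carrying the `location` property plus a custom key-value pair could look like the following. The schema name, the HDFS path, and the JSON layout are illustrative assumptions.

```python
import json

# Hypothetical request body for creating a schema (Hive database).
create_schema_body = {
    "name": "hive_schema",
    "comment": "example database",
    "properties": {
        # Optional: overrides the warehouse directory for this database.
        "location": "/user/hive/warehouse/hive_schema.db",
        # Custom key-value pairs are passed through to the Hive database.
        "key1": "value1",
    },
}

print(json.dumps(create_schema_body, indent=2))
```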

Schema operations

See Manage Relational Metadata Using Gravitino.

Table

Table capabilities

  • The Hive catalog supports creating, updating, and deleting tables in the HMS.
  • Doesn't support column default values.

Table partitions

The Hive catalog supports partitioned tables. Users can create partitioned tables in the Hive catalog with the specific partitioning attribute. Although Gravitino supports several partitioning strategies, Apache Hive inherently only supports a single partitioning strategy (partitioned by column). Therefore, the Hive catalog only supports Identity partitioning.

caution

The fieldName specified in the partitioning attribute must be the name of a column defined in the table.
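A minimal sketch of the column and partitioning fragments for an identity-partitioned table follows. The column names and the exact JSON field names (`strategy`, `fieldName`) are assumptions for illustration; check the Gravitino API docs for the canonical shape.

```python
# Hypothetical column and partitioning fragments for a partitioned Hive table.
# Only identity partitioning is supported; "dt" must be a column of the table.
columns = [
    {"name": "id", "type": "integer", "nullable": True},
    {"name": "name", "type": "string", "nullable": True},
    {"name": "dt", "type": "date", "nullable": True},
]

partitioning = [
    {"strategy": "identity", "fieldName": ["dt"]},
]

# The caution above in code form: every partitioning field must be a column.
column_names = {c["name"] for c in columns}
for p in partitioning:
    assert p["fieldName"][0] in column_names, "partition field must be a table column"
```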

Table sort orders and distributions

The Hive catalog supports bucketed sorted tables. Users can create bucketed sorted tables in the Hive catalog with specific distribution and sortOrders attributes. Although Gravitino supports several distribution strategies, Apache Hive inherently only supports a single distribution strategy (clustered by column). Therefore the Hive catalog only supports Hash distribution.

caution

The fieldName specified in the distribution and sortOrders attribute must be the name of a column defined in the table.
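A bucketed sorted table combines a hash `distribution` with one or more `sortOrders`. The sketch below shows one plausible shape, clustering by `id` into 8 buckets and sorting each bucket by `age` ascending; the field names and JSON keys are assumptions, not the authoritative API schema.

```python
# Hypothetical distribution and sortOrders fragments for a bucketed sorted table.
# Hash distribution clusters rows by "id" into 8 buckets; rows within each
# bucket are sorted by "age" ascending. Field names must be table columns.
distribution = {
    "strategy": "hash",
    "number": 8,
    "funcArgs": [{"type": "field", "fieldName": ["id"]}],
}

sort_orders = [
    {
        "sortTerm": {"type": "field", "fieldName": ["age"]},
        "direction": "asc",
    }
]
```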

Table column types

The Hive catalog supports all data types defined in the Hive Language Manual. The following table lists the data types mapped from the Hive catalog to Gravitino.

| Hive Data Type | Gravitino Data Type | Since Version |
|---|---|---|
| `boolean` | `boolean` | 0.2.0 |
| `tinyint` | `byte` | 0.2.0 |
| `smallint` | `short` | 0.2.0 |
| `int`/`integer` | `integer` | 0.2.0 |
| `bigint` | `long` | 0.2.0 |
| `float` | `float` | 0.2.0 |
| `double`/`double precision` | `double` | 0.2.0 |
| `decimal` | `decimal` | 0.2.0 |
| `string` | `string` | 0.2.0 |
| `char` | `char` | 0.2.0 |
| `varchar` | `varchar` | 0.2.0 |
| `timestamp` | `timestamp` | 0.2.0 |
| `date` | `date` | 0.2.0 |
| `interval_year_month` | `interval_year` | 0.2.0 |
| `interval_day_time` | `interval_day` | 0.2.0 |
| `binary` | `binary` | 0.2.0 |
| `array` | `array` | 0.2.0 |
| `map` | `map` | 0.2.0 |
| `struct` | `struct` | 0.2.0 |
| `uniontype` | `uniontype` | 0.2.0 |
info

Since 0.6.0, data types other than those listed above are mapped to the Gravitino External Type, which represents an unresolvable data type from the Hive catalog.
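The mapping above can be captured as a small lookup table; the helper below is an illustrative sketch (the function name and the `"external"` fallback label are our own, not part of Gravitino's API).

```python
# Hive -> Gravitino type-name mapping from the table above. The 1:1 name
# mapping is shown; parameterized types such as decimal(p, s), char(n), and
# varchar(n) carry their parameters through unchanged.
HIVE_TO_GRAVITINO = {
    "boolean": "boolean",
    "tinyint": "byte",
    "smallint": "short",
    "int": "integer",
    "integer": "integer",
    "bigint": "long",
    "float": "float",
    "double": "double",
    "double precision": "double",
    "decimal": "decimal",
    "string": "string",
    "char": "char",
    "varchar": "varchar",
    "timestamp": "timestamp",
    "date": "date",
    "interval_year_month": "interval_year",
    "interval_day_time": "interval_day",
    "binary": "binary",
    "array": "array",
    "map": "map",
    "struct": "struct",
    "uniontype": "uniontype",
}

def to_gravitino_type(hive_type: str) -> str:
    # Types absent from the table map to the External Type (since 0.6.0).
    return HIVE_TO_GRAVITINO.get(hive_type.lower(), "external")

print(to_gravitino_type("tinyint"))   # byte
print(to_gravitino_type("geometry"))  # external
```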

Table properties

Table properties supply or set metadata for the underlying Hive tables. The following table lists predefined table properties for a Hive table. Additionally, you can define your own key-value pair properties and transmit them to the underlying Hive table.

| Property Name | Description | Default Value | Required | Since version |
|---|---|---|---|---|
| `location` | The location for table storage, such as `/user/hive/warehouse/test_table`. | HMS uses the database location as the parent directory by default. | No | 0.2.0 |
| `table-type` | Type of the table. Valid values include `MANAGED_TABLE` and `EXTERNAL_TABLE`. | `MANAGED_TABLE` | No | 0.2.0 |
| `format` | The table file format. Valid values include `TEXTFILE`, `SEQUENCEFILE`, `RCFILE`, `ORC`, `PARQUET`, `AVRO`, `JSON`, `CSV`, and `REGEX`. | `TEXTFILE` | No | 0.2.0 |
| `input-format` | The input format class for the table, such as `org.apache.hadoop.hive.ql.io.orc.OrcInputFormat`. | The property `format` sets the default value `org.apache.hadoop.mapred.TextInputFormat` and can change it to a different default. | No | 0.2.0 |
| `output-format` | The output format class for the table, such as `org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat`. | The property `format` sets the default value `org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat` and can change it to a different default. | No | 0.2.0 |
| `serde-lib` | The serde library class for the table, such as `org.apache.hadoop.hive.ql.io.orc.OrcSerde`. | The property `format` sets the default value `org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe` and can change it to a different default. | No | 0.2.0 |
| `serde.parameter.` | The prefix for serde parameters. For example, `"serde.parameter.orc.create.index" = "true"` instructs the ORC serde library to create row indexes. | (none) | No | 0.2.0 |
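
As a sketch, a properties map for an external ORC table might look like the following; the HDFS path is a placeholder. Note that setting `format` also switches the default `input-format`, `output-format`, and `serde-lib` to the ORC counterparts, so they usually don't need to be set explicitly.

```python
# Hypothetical properties map for creating an external ORC table.
# The location below is an illustrative placeholder path.
table_properties = {
    "location": "hdfs://namenode:8020/user/hive/warehouse/test_table",
    "table-type": "EXTERNAL_TABLE",
    "format": "ORC",
    # Serde parameters use the "serde.parameter." prefix.
    "serde.parameter.orc.create.index": "true",
}

print(table_properties)
```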

Hive automatically adds and manages some reserved properties. Users aren't allowed to set these properties.

| Property Name | Description | Since Version |
|---|---|---|
| `comment` | Used to store a table comment. | 0.2.0 |
| `numFiles` | Used to store the number of files in the table. | 0.2.0 |
| `totalSize` | Used to store the total size of the table. | 0.2.0 |
| `EXTERNAL` | Indicates whether the table is external. | 0.2.0 |
| `transient_lastDdlTime` | Used to store the last DDL time of the table. | 0.2.0 |

Table indexes

  • Doesn't support table indexes.

Table operations

Refer to Manage Relational Metadata Using Gravitino for more details.

Alter operations

Gravitino defines a unified set of metadata operation interfaces, and almost all Hive `ALTER` operations have corresponding table update requests that enable you to change the structure of an existing table. The following tables list the mapping between Hive `ALTER` operations and Gravitino table update requests.

Alter table
| Hive Alter Operation | Gravitino Table Update Request | Since Version |
|---|---|---|
| Rename Table | Rename table | 0.2.0 |
| Alter Table Properties | Set a table property | 0.2.0 |
| Alter Table Comment | Update comment | 0.2.0 |
| Alter SerDe Properties | Set a table property | 0.2.0 |
| Remove SerDe Properties | Remove a table property | 0.2.0 |
| Alter Table Storage Properties | Unsupported | - |
| Alter Table Skewed or Stored as Directories | Unsupported | - |
| Alter Table Constraints | Unsupported | - |
note

As Gravitino has a separate interface for updating the comment of a table, the Hive catalog treats `comment` as a reserved table property, preventing users from setting the `comment` property directly. Apache Hive itself can still modify the `comment` property of the table.
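
A table update request batches one or more of the updates above. The sketch below pairs a "Rename table" with a "Set a table property" update; the `@type` discriminator values and field names are assumptions for illustration, so verify them against the Gravitino REST API reference before use.

```python
import json

# Hypothetical "updates" payload for altering a table: rename it and set a
# property, matching the "Rename table" and "Set a table property" requests
# in the mapping table above.
alter_table_body = {
    "updates": [
        {"@type": "rename", "newName": "renamed_table"},
        {"@type": "setProperty", "property": "format", "value": "ORC"},
    ]
}

print(json.dumps(alter_table_body, indent=2))
```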

Alter column
| Hive Alter Operation | Gravitino Table Update Request | Since Version |
|---|---|---|
| Change Column Name | Rename a column | 0.2.0 |
| Change Column Type | Update the type of a column | 0.2.0 |
| Change Column Position | Update the position of a column | 0.2.0 |
| Change Column Comment | Update the column comment | 0.2.0 |
Alter partition
note

Support for altering partitions is under development.