Concourse Build Status | Travis Build Status | Zuul Regression Test on Arm


Greenplum

Greenplum Database (GPDB) is an advanced, fully featured, open
source data warehouse, based on PostgreSQL. It provides powerful and rapid analytics on
petabyte-scale data volumes. Uniquely geared toward big data
analytics, Greenplum Database is powered by the world’s most advanced
cost-based query optimizer, delivering high analytical query
performance on large data volumes.

The Greenplum project is released under the Apache 2 license. We want to thank
all our past and present community contributors and are really interested in
all new potential contributions. For the Greenplum Database community,
no contribution is too small; we encourage all types of contributions.

Overview

A Greenplum cluster consists of a master server and multiple
segment servers. All user data resides in the segments; the master
contains only metadata. The master server and all the segments share
the same schema.

Users always connect to the master server, which divides up the query
into fragments that are executed in the segments, and collects the results.

More information can be found on the project website.
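
For example, rows in a Greenplum table are spread across the segments according
to the table’s distribution key. On a running cluster (such as the demo cluster
described below), a quick way to see this is a sketch along the following lines;
the table name is hypothetical, and gp_segment_id is the system column that
identifies the segment a row is stored on:

# Create a table distributed by a column, load some rows, and count rows per segment
psql postgres -c "CREATE TABLE dist_example (id int, payload text) DISTRIBUTED BY (id);"
psql postgres -c "INSERT INTO dist_example SELECT g, 'row ' || g FROM generate_series(1, 1000) g;"
psql postgres -c "SELECT gp_segment_id, count(*) FROM dist_example GROUP BY 1 ORDER BY 1;"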

Building Greenplum Database with GPORCA

GPORCA is a cost-based optimizer which is used by Greenplum Database in
conjunction with the PostgreSQL planner. It is also known simply as ORCA, or as the
Pivotal Optimizer. The code for GPORCA resides in src/backend/gporca. It is built
automatically by default.

Installing dependencies (for macOS developers)

Follow these macOS steps for getting your system ready for GPDB

Installing dependencies (for Linux developers)

Follow the appropriate Linux steps for getting your system ready for GPDB

Build the database

# Configure build environment to install at /usr/local/gpdb
./configure --with-perl --with-python --with-libxml --with-gssapi --prefix=/usr/local/gpdb

# Compile and install
make -j8
make -j8 install

# Bring the Greenplum environment into your running shell
source /usr/local/gpdb/greenplum_path.sh

# Start demo cluster
make create-demo-cluster
# (gpdemo-env.sh contains __PGPORT__ and __MASTER_DATA_DIRECTORY__ values)
source gpAux/gpdemo/gpdemo-env.sh
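
Once the demo cluster is up and gpdemo-env.sh has been sourced, you can connect
to the master with psql. As a quick sanity check (a minimal sketch; the default
postgres database is assumed to exist):

# Connect to the master and list the configured segments
psql postgres -c "SELECT dbid, content, role, port FROM gp_segment_configuration;"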

The directory and the TCP ports for the demo cluster can be changed on the fly.
Instead of make cluster, consider:

DATADIRS=/tmp/gpdb-cluster PORT_BASE=5555 make cluster

The TCP port for the regression test can be changed on the fly:

PGPORT=5555 make installcheck-world

To turn GPORCA off and use the Postgres planner for query optimization:

set optimizer=off;
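
That disables GPORCA for the current session only. As a hedged sketch, to turn it
off cluster-wide you can use the gpconfig utility (on your PATH after sourcing
greenplum_path.sh) and then reload the configuration:

# Set the optimizer GUC to off for the whole cluster, then reload the configuration
gpconfig -c optimizer -v off
gpstop -u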

If you want to clean all generated files:

make distclean

Running tests

  • The default regression tests
make installcheck-world
  • The top-level target installcheck-world will run all regression
    tests in GPDB against the running cluster. For testing individual
    parts, the respective targets can be run separately (see the example
    after this list).

  • The PostgreSQL check target does not work. Setting up a
    Greenplum cluster is more complicated than a single-node PostgreSQL
    installation, and no one has done the work to have make check
    create a cluster. Create a cluster manually or use gpAux/gpdemo/
    (example below) and run the top-level make installcheck-world
    against that. Patches are welcome!

  • The PostgreSQL installcheck target does not work either, because
    some tests are known to fail with Greenplum. The
    installcheck-good schedule in src/test/regress excludes those
    tests.

  • When adding a new test, please add it to one of the GPDB-specific tests,
    in greenplum_schedule, rather than to the PostgreSQL tests inherited from
    upstream. We try to keep the upstream tests identical to the upstream
    versions, to make merging with newer PostgreSQL releases easier.
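
For example, to run only the GPDB-specific installcheck-good schedule mentioned
above (the target name is assumed from the schedule’s location in src/test/regress):

# Run the installcheck-good schedule against the running cluster
make -C src/test/regress installcheck-good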

Alternative Configurations

Building GPDB without GPORCA

Currently, GPDB is built with GPORCA by default. If you want to build GPDB
without GPORCA, pass the --disable-orca flag to configure.

# Clean environment
make distclean

# Configure build environment to install at /usr/local/gpdb
./configure --disable-orca --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb

Building GPDB with PXF

PXF is an extension framework for GPDB that enables fast access to external Hadoop datasets.
Refer to PXF extension for more information.

Currently, GPDB is built with PXF by default (--enable-pxf is on).
To build GPDB without PXF, simply invoke ./configure with the additional option --disable-pxf.
PXF requires curl, so --enable-pxf is not compatible with the --without-libcurl option.
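
For example, mirroring the configure invocations above (flags other than
--disable-pxf are illustrative):

# Configure without the PXF extension framework
./configure --disable-pxf --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb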

Building GPDB with Python3 enabled

GPDB supports Python 3 UDFs through the plpython3u language.

See how to enable Python3 for details.
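
As a rough sketch, assuming the server was built and initialized with Python 3
support and the plpython3u extension is available, a UDF can be created and
called like this (the function name is hypothetical):

# Enable plpython3u in a database, define a simple UDF, and call it
psql postgres <<'EOF'
CREATE EXTENSION IF NOT EXISTS plpython3u;
CREATE OR REPLACE FUNCTION py_version() RETURNS text AS $$
import sys
return sys.version
$$ LANGUAGE plpython3u;
SELECT py_version();
EOF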

Building GPDB client tools on Windows

See Building GPDB client tools on Windows for details.

Development with Docker

See README.docker.md.

We provide a Docker image with all dependencies required to compile and test
GPDB (see Usage). You can view the dependency Dockerfile at ./src/tools/docker/centos6-admin/Dockerfile.
The image is hosted on Docker Hub at pivotaldata/gpdb-dev:centos6-gpadmin.
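
For example, to experiment with that image interactively (a minimal sketch; the
/workspace mount point is arbitrary):

# Pull the development image and start an interactive shell with the source tree mounted
docker pull pivotaldata/gpdb-dev:centos6-gpadmin
docker run -it --rm -v "$(pwd)":/workspace pivotaldata/gpdb-dev:centos6-gpadmin bash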

A quickstart guide to Docker can be found on the Pivotal Engineering Journal.

Development with Vagrant

There is a Vagrant-based quickstart guide for developers.

Code layout

The directory layout of the repository follows the same general layout
as upstream PostgreSQL. There are changes compared to PostgreSQL
throughout the codebase, but a few larger additions are worth noting:

  • gpMgmt/

    Contains Greenplum-specific command-line tools for managing the
    cluster. Scripts like gpinit, gpstart, gpstop live here. They are
    mostly written in Python.

  • gpAux/

    Contains Greenplum-specific release management scripts, and vendored
    dependencies. Some additional directories are submodules and will be
    made available over time.

  • gpcontrib/

    Much like the PostgreSQL contrib/ directory, this directory contains
    Greenplum-specific extensions such as gpfdist, PXF, and gpmapreduce.

  • doc/

    In PostgreSQL, the user manual lives here. In Greenplum, the user
    manual is maintained separately and only the reference pages used
    to build man pages are here.

  • gpdb-doc/

    Contains the Greenplum documentation in DITA XML format. Refer to
    gpdb-doc/README.md for information on how to build and work with
    the documentation.

  • ci/

    Contains configuration files for the GPDB continuous integration system.

  • src/backend/cdb/

    Contains larger Greenplum-specific backend modules. For example,
    communication between segments, turning plans into parallelizable
    plans, mirroring, distributed transaction and snapshot management,
    etc. cdb stands for Cluster Database - it was a working name used in
    the early days. That name is no longer used, but the cdb prefix
    remains.

  • src/backend/gpopt/

    Contains the so-called translator library, for using the GPORCA
    optimizer with Greenplum. The translator library is written in C++
    and contains glue code for translating plans and queries
    between the DXL format used by GPORCA and the PostgreSQL internal
    representation.

  • src/backend/gporca/

    Contains the GPORCA optimizer code and tests. This is written in C++. See
    README.md for more information and how to
    unit-test GPORCA.

  • src/backend/fts/

    FTS is a process that runs on the master node and periodically
    polls the segments to maintain the status of each segment.

Contributing

Greenplum is maintained by a core team of developers with commit rights to the
main gpdb repository on GitHub. At the
same time, we are very eager to receive contributions from anybody in the wider
Greenplum community. This section covers all you need to know if you want to see
your code or documentation changes added to Greenplum and appear in
future releases.

Getting started

Greenplum is developed on GitHub, and anybody wishing to contribute to it will
have to have a GitHub account and be familiar
with Git tools and workflow.
It is also recommended that you follow the developer’s mailing list,
since some of the contributions may generate more detailed discussions there.

Once you have your GitHub account, fork
this repository so that you can have your private copy to start hacking on and to
use as the source of pull requests.

Anybody contributing to Greenplum has to be covered by either the Corporate or
the Individual Contributor License Agreement. If you have not previously done
so, please fill out and submit the Contributor License Agreement.
Note that we do allow really trivial changes to be contributed without a
CLA if they fall under the rubric of obvious fixes.
However, since our GitHub workflow checks for a CLA by default, you may find it
easier to submit one instead of claiming an “obvious fix” exception.

Licensing of Greenplum contributions

If the contribution you’re submitting is original work, you can assume that Pivotal
will release it as part of an overall Greenplum release available to the downstream
consumers under the Apache License, Version 2.0. However, in addition to that, Pivotal
may also decide to release it under a different license (such as the PostgreSQL License)
to the upstream consumers that require it. A typical example here would be Pivotal
upstreaming your contribution back to the PostgreSQL community (which can be done either
verbatim or with your contribution being upstreamed as part of a larger changeset).

If the contribution you’re submitting is NOT original work, you have to indicate the name
of the license and also make sure that it is similar in terms to the Apache License 2.0.
The Apache Software Foundation maintains a list of these licenses under Category A. In
addition, you may be required to make proper attribution in the
NOTICE file, similar to these examples.

Finally, keep in mind that it is NEVER a good idea to remove licensing headers from
work that is not your own. Even if you are using only parts of a file that
originally had a licensing header at the top, you should err on the side of preserving it.
As always, if you are not quite sure about the licensing implications of your contributions,
feel free to reach out to us on the developer mailing list.

Coding guidelines

Your chances of getting feedback and seeing your code merged into the project
greatly depend on how granular your changes are. If you happen to have a bigger
change in mind, we highly recommend engaging on the developer’s mailing list
first and sharing your proposal with us before you spend a lot of time writing
code. Even when your proposal gets validated by the community, we still recommend
doing the actual work as a series of small, self-contained commits. This makes
the reviewer’s job much easier and increases the timeliness of feedback.

When it comes to the C and C++ parts of Greenplum, we try to follow
PostgreSQL Coding Conventions.
In addition to that, we require that:

  • All Python code passes Pylint
  • All Go code is formatted according to gofmt

We recommend using git diff --color when reviewing your changes so that you
don’t have any spurious whitespace issues in the code that you submit.
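
As a rough pre-submission checklist under those guidelines (the Python file path
below is a hypothetical placeholder; adjust it to the files you actually touched):

# Lint the Python files you changed (path is a placeholder)
pylint gpMgmt/bin/your_changed_module.py

# List any Go files whose formatting differs from gofmt
gofmt -l .

# Catch spurious whitespace problems before committing
git diff --check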

All new functionality that is contributed to Greenplum should be covered by regression
tests that are contributed alongside it. If you are uncertain about how to test or document
your work, please raise the question on the gpdb-dev mailing list and the developer
community will do its best to help you.

At the very minimum you should always be running
make installcheck-world
to make sure that you’re not breaking anything.

Changes applicable to upstream PostgreSQL

If the change you’re working on touches functionality that is common between PostgreSQL
and Greenplum, you may be asked to forward-port it to PostgreSQL. This is not only so
that we keep reducing the delta between the two projects, but also so that any change
that is relevant to PostgreSQL can benefit from a much broader review by the upstream
PostgreSQL community. In general, it is a good idea to keep both code bases handy so
you can be sure whether your changes may need to be forward-ported.

Submission timing

To improve the odds of the right discussion of your patch or idea happening, pay attention
to what the community work cycle is. For example, if you send in a brand new idea in the
beta phase of a release, we may defer review or target its inclusion for a later version.
Feel free to ask on the mailing list to learn more about the Greenplum release policy and timing.

Patch submission

Once you are ready to share your work with the Greenplum core team and the rest of
the Greenplum community, you should push all the commits to a branch in your own
repository forked from the official Greenplum repository and
send us a pull request.

We welcome work-in-progress submissions in order to get feedback early
in the development process. When creating the pull request, select “Draft” in
the dropdown menu to clearly mark the intent of the pull
request. Prefixing the title with “WIP:” is also good practice.

All new features should be submitted against the main master branch. Bugfixes
should also be submitted against master unless they exist only in a supported
back-branch. If the bug exists in both master and back-branches, explain this
in the PR description.

Validation checks and CI

Once you submit your pull request, you will immediately see a number of validation
checks performed by our automated CI pipelines. There will also be a CLA check
telling you whether your CLA was recognized. If any of these checks fails, you
will need to update your pull request to take care of the issue. Pull requests
with failed validation checks are very unlikely to receive any further peer
review from community members.

Keep in mind that the most common reason for a failed CLA check is a mismatch
between an email on file and an email recorded in the commits submitted as
part of the pull request.

If you cannot figure out why a certain validation check failed, feel free to
ask on the developer’s mailing list, but make sure to include a direct link
to a pull request in your email.

Patch review

A submitted pull request with passing validation checks is assumed to be available
for peer review. Peer review is the process that ensures that contributions to Greenplum
are of high quality and align well with the road map and community expectations. Every
member of the Greenplum community is encouraged to review pull requests and provide
feedback. Since you don’t have to be a core team member to do that, we
recommend that anybody who’s interested in becoming a long-term contributor to
Greenplum follow the stream of pull requests. As Linus would say,
“given enough eyeballs, all bugs are shallow”.

One outcome of the peer review could be a consensus that you need to modify your
pull request in certain ways. GitHub allows you to push additional commits into
a branch from which a pull request was sent. Those additional commits will then be
visible to all of the reviewers.

A peer review converges when it receives at least one +1 vote and no -1 votes from
the participants. At that point you should expect one of the core team
members to pull your changes into the project.

Greenplum prides itself on being a collaborative, consensus-driven environment.
We do not believe in vetoes, and any -1 vote cast as part of the peer review
has to come with a detailed technical explanation of what’s wrong with the change.
Should a strong disagreement arise, it may be advisable to take the matter to
the mailing list, since that allows for a more natural flow of the conversation.

At any time during the patch review, you may experience delays based on the
availability of reviewers and core team members. Please be patient. That being
said, don’t get discouraged either. If you’re not getting the expected feedback for
a few days, add a comment asking for updates on the pull request itself or send
an email to the mailing list.

Direct commits to the repository

On occasion you will see core team members committing directly to the repository
without going through the pull request workflow. This is reserved for small changes
only and the rule of thumb we use is this: if the change touches any functionality
that may result in a test failure, then it has to go through a pull request workflow.
If, on the other hand, the change is in a non-functional part of the code base
(such as fixing a typo inside a comment block), core team members can decide to
just commit to the repository directly.

Documentation

For Greenplum Database documentation, please check the
online documentation.

For further information beyond the scope of this README, please see
our wiki.