DNVM, DNX, and DNU - Understanding the ASP.NET 5 Runtime Options

Posted on May 17 2015 08:13 PM by jatten in ASP.Net, ASP.NET MVC, C#, CodeProject   ||   Comments (6)

ASP.NET 5 introduces a new runtime model for the .NET framework which allows us to employ a "pay-as-you-go" approach, building composable applications using only those framework components our application needs, without relying upon a central, monolithic library repository present on the host machine.

This new model also provides us with command line tools for managing our .NET version selection, library packages, and execution environment outside of Visual Studio. It is now possible to develop cross-platform ASP.NET applications using a text editor and command line (either CMD or Powershell on Windows) without ever opening Visual Studio.


Understanding the relationship between the .NET Version Manager (DNVM), the .NET Execution Environment (DNX), and the .NET Development Utilities (DNU) is fundamental to developing with ASP.NET 5. In this post we will look at installing and using DNVM, DNX, and DNU to work with ASP.NET from the command line.

DNVM - The .NET Version Manager

DNVM is a version manager tool for the command line. As its name implies, DNVM provides the functionality needed to configure your .NET runtime. We can use DNVM to specify which version of the .NET Execution Environment to use at the process, user, or machine level.

DNX - The .NET Execution Environment

What is this DNX, anyway? From the ASP.NET Documentation site:

The .NET Execution Environment (DNX) is a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux. It provides a host process, CLR hosting logic and managed entry point discovery. DNX was built for running cross-platform ASP.NET Web applications, but it can run other types of .NET applications, too, such as cross-platform console apps.

Broadly speaking, different DNX runtime versions are available, reflecting the flavor of the .NET framework you want your application to target. At a basic level, there are different versions for:

  • .NET Framework - The familiar .NET framework we all know and love.
  • .NET Core - A subset of the .NET Framework which includes a modular runtime and library implementation and is managed via Nuget. .NET Core is cross-platform, open source, and includes the Core libraries in CoreFX, and a Core runtime in CoreCLR.
  • Mono - Using Mono, we can create ASP.NET applications which compile and run on OSX and Linux machines.

We can use the .NET Version Manager to select which DNX version we want to use with our application.

DNU - .NET Development Utilities

DNU is a command-line tool which provides a variety of utility functions to assist with development in ASP.NET. Most commonly, we will use DNU to install and manage library packages in our application, and/or to package and publish our own application.

DNU uses Nuget behind the scenes for package management and deployment.
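
As a quick preview of the DNU commands we will lean on most (a sketch: Newtonsoft.Json below is just a hypothetical package name, and dnu build/pack/publish come up again at the end of this post):

Common DNU Commands:
> dnu restore
> dnu install Newtonsoft.Json
> dnu build
> dnu pack
> dnu publish

dnu restore pulls down the packages named in project.json, dnu install adds a package reference and fetches it, dnu build compiles the project and outputs binaries, dnu pack produces a Nuget package, and dnu publish bundles the application for deployment.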

Getting Started - Installing DNVM

If you have installed Visual Studio 2015 (currently available as a Release Candidate), you already have the .NET Version Manager installed.

UPDATE: According to the ASP.NET team (thanks Glenn!), DNX/DNVM isn't actually installed until you run File -> New after VS 2015 is installed, so if you have VS 2015 installed but have not cracked open a new file yet, you won't have it on your machine.

If you want to upgrade to the most recent version of DNVM, or if you want to work with ASP.NET, but don't want to install Visual Studio, you can download the DNVM script as follows from either the Windows Command prompt (CMD) or using Powershell:

Install or Upgrade DNVM Using CMD:
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}"

 

Install or Upgrade DNVM Using Powershell:
&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}

 

Once we have DNVM on our machine, we should be able to fire up the Windows command prompt, and get started.

At the command line (or Powershell prompt) run the following to see that DNVM is, in fact, installed on your machine:

Run DNVM without Arguments:
> dnvm

 

If all has gone well to this point, you should see the following (we're going to stick with CMD for this post, to keep things simple):

[Screenshot: console output from running dnvm without arguments]

Now that we have DNVM installed and running, let's see which DNX versions we have available, and how to install DNX versions if we need.

Working with DNVM - List the Available .NET Runtimes

To examine the runtime options available to us, we can run the following command from the Console:

List Available DNX Runtimes Using DNVM List:
> dnvm list

 

I had previously installed Visual Studio 2015 on my machine, which also installed a few DNX versions, so when I run the above I see the following output:

[Screenshot: console output from dnvm list]

Because I had installed Visual Studio 2015 already, there are DNX versions available for .NET Framework (the beta4 clr in the image above) and also .NET Core (the coreclr above) for both 32 and 64 bit architectures.

If you have NOT installed Visual Studio 2015, your list is very likely empty. Installing DNVM does not, in and of itself, install any DNX versions on your machine. No worries, we'll take care of that next.

Working with DNVM - Installing DNX Runtime Versions

If you just installed DNVM fresh, or if you want to get the latest version of a particular DNX implementation, we can use the dnvm install command with some appropriate arguments:

By default, DNVM will work with the basic .NET framework, and assume the x86 architecture. For example, if we do the following:

DNVM Install Latest:
> dnvm install latest

 

DNVM will download and install the latest stable version of the regular .NET framework for the x86 architecture. If we run dnvm list again, we see the following:

[Screenshot: installed DNX runtimes after running dnvm install latest]

If we want to specify a different runtime (say, CoreCLR instead of the .NET Framework), and/or specify the x64 version, we need to add the -r (-runtime) and/or -arch (-architecture) flags and values:

DNVM Install Latest for Specific Runtime/Architecture:
> dnvm install latest -r coreclr -arch x64

 

Once the above is finished running, we have now added the latest version of the full .NET Framework for the x86 architecture, and CoreCLR for the x64 architecture to our available runtimes list:

[Screenshot: the installed DNX runtimes listed by dnvm list]

DNVM Upgrade vs. DNVM Install

DNVM also has an upgrade command, which behaves just a little differently than Install.

DNVM Install fetches the specified version, and makes it active in the current Command process by adding it to the process PATH variable. The selection is "Active" only for the duration of our terminal session. Once the current terminal session is closed, the active selection will revert to none, or to whatever had been previously persisted in our User PATH variable as the default.

DNVM Upgrade does essentially the same thing, but also adds it to the User's PATH variable as the default active version, and updates the default alias. For example, if we used dnvm upgrade to install the latest 64 bit version of the .NET framework clr:

Upgrade .NET Framework CLR to Latest Version Using DNVM Upgrade:
> dnvm upgrade -r clr -arch x64

 

After running the above, our Console output tells us that the latest version of dnx-clr-win-x64 has been added not only to the current process PATH, but also to our User PATH. In addition, the "default" alias has been updated to point to the just-installed version:

[Screenshot: console output from dnvm upgrade]

Note that dnvm upgrade pulls down the latest stable release from Nuget. If you want to work on the bleeding edge with the latest unstable (development) release, you can use:

Upgrading to Latest Unstable (development) Runtime Releases:
> dnvm upgrade -u

 

In the above, you can specify runtime/architecture as previously using the -r and -arch flags.
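
For example, to grab the latest unstable 64-bit CoreCLR build (a sketch; the flags combine just as they do with dnvm install above):

Upgrade to the Latest Unstable 64-bit CoreCLR:
> dnvm upgrade -u -r coreclr -arch x64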

5/17/2015 - NOTE: If you installed Visual Studio 2015, the system environment PATH variable likely points to C:\Program Files\Microsoft DNX\Dnvm\. As of this writing, the Visual Studio version of DNVM does not recognize the -u (-unstable) flag. Even if you installed DNVM using the shell scripts at the beginning of this article, the Visual Studio PATH entry will be found first. In order to use the -u flag, you will need to either change the order in which the PATH entries occur, or remove the Visual Studio PATH entry and add DNVM using one of the scripts above.

Working with DNVM - Selecting or Changing the Active DNX Runtime

Notice in the previous section the asterisk that appears in the "Active" column of the console output once we started installing new DNX runtime versions. When we use dnvm Install, DNVM makes the just-installed version the default active version for the current process (in other words, for the current terminal session only).

As noted previously, when we run dnvm upgrade, DNVM makes the just-installed version the default active version not only for the current process, but also at the User level.

We can switch to a different DNX runtime by using the dnvm use command. By default, dnvm use changes the active selection for the current Command process. For example, given the current active runtime in the console output from the previous section (1.0.0-beta4-11566 clr x64), we could decide we want to switch to the latest 64 bit coreclr version for our current terminal session instead:

Select or Switch to a Different Runtime for the Current Process:
> dnvm use 1.0.0-beta4-11566 -r coreclr -arch x64

 

If we run the above command, and then run dnvm list again, we see our active version has changed:

[Screenshot: new active DNX runtime selected for the current process]

If we look closely at the console output above though, we see that the new version was added to the process PATH, meaning, the selection will persist for the current Console process only. If we close the console window, and open another, the default set in our User PATH will be selected again.

If we want to set our user default, we can run the command just like we did previously, this time adding the -p (-persistent) flag, which will cause the selection to be added to our User PATH instead of the process PATH:

Select or Switch to a New DNX Runtime Default for User PATH:
> dnvm use 1.0.0-beta4-11566 -r coreclr -arch x64 -p

 

Now, our selection will persist as the default between Command sessions. We can still select a different version per process just like we did previously, but until we make additional changes, the default active DNX runtime in a Command session will be the beta 4 CoreCLR from build 11566.

Remove Runtime References from PATH Variables

If we want to remove all runtime references from the process path (in other words, return to a state where no runtime is set as "Active"), we can run:

Remove DNX Runtime References from Process PATH Variable:
> dnvm use none

 

If we want to remove any defaults we have set in our User PATH variable, we can add the -p flag:

Remove DNX Runtime References from User PATH Variable:
> dnvm use none -p

 

Use Aliases to Make Typing Easier

If you have been working through the examples so far, you may have noticed how painful it can be typing out the full, semantic version names each time you want to refer to a runtime version. DNVM affords us the ability to assign aliases to the different installed versions. We can do this during install or upgrade using the -a (-alias) flag, or we can set aliases for existing installed versions using the alias command.
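
As a sketch of the -a flag at install time (core64 is just a hypothetical alias name):

Assign an Alias While Installing a Runtime:
> dnvm install latest -r coreclr -arch x64 -a core64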

For example, we already have an alias, default, which in my case was set up when I installed Visual Studio. It was originally assigned to the regular x86 .NET framework CLR, and then was re-assigned when we ran dnvm upgrade in the previous section. As of now, the alias default on my machine points to the latest (build 11566) 64-bit version of the .NET CLR.

I can set an alias for the latest x64 version of CoreCLR like so:

Assign an Alias to a DNX Runtime:
> dnvm alias core-64-latest 1.0.0-beta4-11566 -r coreclr -arch x64

 

Having done that, I can now refer to that particular runtime by its alias, for example:

Select or Switch to a new DNX Runtime Using an Alias:
> dnvm use core-64-latest

 

5/17/2015 - NOTE: There is currently a bug (see issue #175 / PR #248 at the DNVM Repo on Github) in which referring to a CoreClr runtime by alias does not work. As of this writing, the PR has not yet been merged, but things are supposed to work as described here. Until the issue is resolved, it is still necessary to refer to any CoreClr runtime by its full version name…

We can re-assign a previously set alias by simply using the dnvm alias command, and assigning the alias to a different DNX Runtime.
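
For example, to re-point the default alias at the 64-bit CoreCLR, using the same syntax as above:

Re-assign an Existing Alias:
> dnvm alias default 1.0.0-beta4-11566 -r coreclr -arch x64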

DNX and DNU - Working with an Example Project from the Command Line

To get a feel for how DNVM, DNU, and DNX work together, we'll put together a very basic example project, using only the Windows Command Prompt and a text editor (for this post, I'm going to use Sublime Text 3, but we could just as easily use Visual Studio Code, or Notepad).

First, let's create a project directory, and then open the folder in Sublime Text:

Create a Project Directory and Navigate into the Directory using CMD:
C:\Users\John> mkdir dnx_demo
C:\Users\John> cd dnx_demo

 

Then, open the project directory in Sublime Text. First, we'll add a project.json file:

Add project.json File:
{
  "version": "1.0.0-*",
  "description": "Silly demo project",
  "commands": {
    "runme": "dnx_demo"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": {
      "dependencies": {
        "System.Console": "4.0.0-beta-22816",
        "Microsoft.CSharp": "4.0.0-beta-22816"
      }
    }
  }
}

 

In the above, note we have defined a command, "runme", which refers to the project name. We can invoke this command from the Command Line using dnx to run our project. Also note, we have specified both the traditional .NET Framework (dnx451) as well as .NET Core (dnxcore50) as compilation targets, so our application will be cross-compiled for both DNX runtimes and frameworks.

Next, add a file named Program.cs and paste in the following code:

The Program.cs File:
using System;
namespace dnx_demo
{
    public class Program
    {
        public void Main(string[] args)
        {
            Console.WriteLine("No Visual Studio Here!!");
            Console.Read();
        }
    }
}

 

By simply adding these two files, we have (almost) everything we need to run a very basic, but complete, .NET Console application. Save the two files above, and return to the Command prompt.

Use DNU Restore to Restore Packages

Our project.json file specifies the target frameworks and dependencies our project requires. Note that for this project, there are no packages to be pulled down when we run against the standard .NET framework 4.5.1 - the required references are already present in the .NET framework itself.

We will need to restore packages before we can run against the .NET Core framework and CoreCLR. Remember, .NET Core is all about "pay as you go." In other words, we pull in only those libraries we need for our application to run, and leave out everything else.

We can restore library packages by simply running dnu restore from within our project directory:

Use DNU Restore to Restore Library Package Dependencies:
C:\Users\John\dnx_demo> dnu restore

 

Run the Example Application Using DNX

With our package dependencies restored, we can run our application using DNX, once we have selected which DNX runtime we want to use.

In my case, we set the default active runtime to the latest .NET Core CLR / x64 in a previous example, so we can run the application straight away from within our project directory:

Run the Example Application Using DNX:
C:\Users\John\dnx_demo> dnx . runme

 

Take a close look there. Since I am already in the project directory, I can simply type dnx, and use a period to indicate the current directory. DNX will find the command, runme, that we set up in our project.json file, and use that to run the project.

[Screenshot: console output from running the example with dnx]

We could also simply reference the project by name:

Run the Example by Referencing the Project Directory Name:
C:\Users\John\dnx_demo> dnx . dnx_demo

 

Or, as long as we are in the project directory, we could keep it REAL SIMPLE and just use dnx . run:

Run the Example Using dnx . run:
C:\Users\John\dnx_demo> dnx . run

 

Now, let's switch runtimes from .NET Core to the traditional .NET Framework. Recall, we had previously used dnvm upgrade to install the latest x64 binaries for the .NET framework, and when we did that, our default alias was assigned to that runtime:

Switch to the .NET Framework Runtime Using DNVM Use:
C:\Users\John\dnx_demo> dnvm use default

We can check to make sure we are now targeting the .NET CLR instead of CoreCLR, and then run our application again:

[Screenshot: running the example using the .NET Framework and standard CLR]

 

Summing Up

So far, we've examined the basics of using the .NET Version Manager, the .NET Development Utility (DNU), and the .NET Execution Environment in order to get a basic application up and running in the new ASP.NET environment.

In our simple example above, we compiled and executed our project in memory. There is no generation of binaries as part of the dnx run process. DNX and DNU offer a wealth of additional features, such as outputting binaries as part of the build process (dnu build), creating Nuget packages (dnu pack), and other useful commands.
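
For example, from the project directory (a sketch; output locations will vary with your configuration):

Output Binaries and a Nuget Package Using DNU:
C:\Users\John\dnx_demo> dnu build
C:\Users\John\dnx_demo> dnu pack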

DNU, DNX, and DNVM are currently in active development, and things are changing daily. Keep your eye on the ASP.NET 5 repo, and watch for updates here and elsewhere.

We will explore more in upcoming posts.


Use Postgres JSON Type and Aggregate Functions to Map Relational Data to JSON

Posted on April 22 2015 05:11 AM by jatten in Postgres, Database, CodeProject   ||   Comments (0)


Postgres is just too cool. Because Postgres can work with both JSON and arrays as first-class data types, it is possible to perform some very handy mappings on the server end which might otherwise become performance bottlenecks on the client side.

When working in Node.JS, it might be handy to do some heavy lifting via Postgres to reduce some n+1 / lazy loading issues, especially when pulling data for display. Using some of Postgres’ in-built JSON functions in conjunction with the JSON data type, we can compose ready-to-use JSON objects before returning the result.

In this post, we’ll take a quick look at the row_to_json() function, the array_to_json() function, and the json_agg() function, and see how between these three we can shape relational data on the database side, possibly in a much more performant manner than we might within our application.

First off, let’s aggregate some row data into a JSON object.

Example Data from Chinook (modified)

For the examples which follow, I will be using a slightly modified version of the Chinook database, which I tweaked to be a little more Postgres-friendly.

Consider the following tables from Chinook (with Sample Data added):

The Chinook Artists and Albums Tables:
CREATE TABLE albums
(
    id int DEFAULT nextval('albums_id_seq'::regclass) NOT NULL,
    title VARCHAR(160) NOT NULL,
    artist_id INT NOT NULL,
    CONSTRAINT pk_albums PRIMARY KEY  (id)
);
CREATE TABLE artists
(
    id int DEFAULT nextval('artists_id_seq'::regclass) NOT NULL,
    name VARCHAR(120),
    CONSTRAINT pk_artists PRIMARY KEY  (id)
);

 

For our purposes the artists and albums tables, populated as they are, will do admirably.
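
If you don’t have the full Chinook data handy, a few hypothetical rows matching the records used in the examples below are enough to follow along:

Sample Data for the Examples (a small, hypothetical subset of Chinook):
INSERT INTO artists (id, name) VALUES
  (1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (12, 'Black Sabbath');
INSERT INTO albums (id, title, artist_id) VALUES
  (1, 'For Those About To Rock We Salute You', 1),
  (4, 'Let There Be Rock', 1),
  (2, 'Balls to the Wall', 2),
  (3, 'Restless and Wild', 2),
  (5, 'Big Ones', 3),
  (16, 'Black Sabbath', 12),
  (17, 'Black Sabbath Vol. 4 (Remaster)', 12);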

Transform Row Data to a JSON Object Using row_to_json()

We can use the Postgres row_to_json() function in a straightforward manner to take a full row of data, and squeeze it into a JSON object before the result set is returned.

For example, let’s grab all the artist records, and return rows of JSON:

Transform Artist Records to JSON:
select row_to_json(artists)
from (
  select * from artists
) as artists;

 

Running the query above gives output like so:

Result Set from row_to_json on the Artists Table:
{"id":1,"name":"AC/DC"}
{"id":2,"name":"Accept"}
{"id":3,"name":"Aerosmith"}
// ... a bunch more artist records ...

 

We could modify the above, adding some criteria so that we limit the records to a specific artist, like so:

Transform a Specific Artist Record to JSON:
select row_to_json(artists)
from (
  select * from artists where id = 12
) as artists;

 

Running the above with criteria set to artist id 12 (which happens to be Black Sabbath) returns the following predictable result:

Result Set from row_to_json() with Criteria:
{"id":12,"name":"Black Sabbath"}

 

Now, each artist can have any number of albums in the albums table. Wouldn’t it be nice if we could grab the albums for a specific artist, and transform them into a JSON array?

Aggregate Rows into a JSON Array Using the json_agg() Function

Postgres offers us the json_agg() function, which takes input values and aggregates them into a JSON array.

For example, we might want to aggregate the album records for a specific artist into a JSON array:

Aggregate Album Records into JSON Array using json_agg():
select json_agg(albums)
from (
  select * from albums where artist_id = 12
) as albums;

 

Running this query returns the following:

Result Set from json_agg() on the Albums Table:
[{"id":16,"title":"Black Sabbath","artist_id":12}, 
 {"id":17,"title":"Black Sabbath Vol. 4 (Remaster)","artist_id":12}]

 

There we go… we now have an array of albums for a specific artist. Now, can we expand on this, and return the artist record as a JSON object, including an albums property represented by such an array, containing all the albums for each artist?

Why, yes, yes we can!

Aggregate Parent and Child Records into a Single JSON Object in the Result Set

Let’s return each row as a complete JSON object, such that each artist record is represented as JSON, and contains an albums property which itself contains an array of album objects.

We can aggregate the album records for each artist using json_agg() in a correlated subquery with the artists table, and then we can pass each row to the row_to_json() function to transform the result into a complete JSON object, ready for use in our application.

Per usual with correlated subqueries, keep your eye on the numerous database object aliases here. Naming can get a little funky…

Transform and Aggregate Artist and Album Records into JSON Objects:
select row_to_json(art) as artists
from (
  select a.id, a.name,
    (select json_agg(alb)
     from (
       select * from albums where artist_id = a.id
     ) alb
    ) as albums
  from artists as a
) art;

 

The result of the query above:

Result Set from Correlated Transform/Aggregate Query:
  {"id":1,"name":"AC/DC","albums":[
    {"id":1,"title":"For Those About To Rock We Salute You","artist_id":1}, +
    {"id":4,"title":"Let There Be Rock","artist_id":1}
  ]}
  {"id":2,"name":"Accept","albums":[
    {"id":2,"title":"Balls to the Wall","artist_id":2},                    +
    {"id":3,"title":"Restless and Wild","artist_id":2}
  ]}
  {"id":3,"name":"Aerosmith","albums":[
    {"id":5,"title":"Big Ones","artist_id":3}
  ]}
  // ... a bunch more records ...

 

Voila – we have ready-to-use JSON to return to our application. Most importantly, we have completely assembled artist objects, with albums available as an array via the albums property on each artist.

The Tip of the Iceberg

Postgres offers a wealth of innovative features such as the JSON / JSONB data type, the array data type, HStore, and plenty of functionality built around each. What we have seen here is a simple solution to one of the common challenges presented by data access and managing parent/child relationships – the lazy loading / n+1 problem.

By flexing the power of Postgres’ JSON type and associated functions, we have blurred the line between document and relational storage.

If you haven’t taken Postgres for a spin yet, I strongly suggest you do. There’s more where that came from.


Installing and Configuring PostgreSQL 9.4 on Linux Mint/Ubuntu

Posted on April 19 2015 07:41 AM by jatten in Postgres, Database, Linux, CodeProject   ||   Comments (1)


Installing and configuring PostgreSQL on a Linux box is either simple, because you are a Linux and/or Postgres expert, or not so simple, because you are new(er) to Linux, Postgres, or both. Over the past year, I have resided firmly in the latter camp. I am a huge fan of the Postgres database platform, and I have been slowly but steadily trying to improve my Linux chops.

If you are an experienced Linux user, or a PostgreSQL DBA, this is not the post for you, although your feedback and constructive criticism are most welcome – if you see something amiss, please do let me know in the comments, or via email.

I should note here that, where practical, I do as much as I can on Linux from the terminal. While the desktop/GUI is handy for some tasks, I am doing my level best to become proficient with the terminal in Linux. I strongly recommend doing the same. Which is how we’re going to do things here.

The installation process for Postgres on Ubuntu or Linux Mint is, like many things in the Linux world, less than intuitive for new users. As much for my own sake as anyone else’s, I’m going to walk through the steps to getting Postgres installed and configured on a Linux box.

Why Postgres?

PostgreSQL is a fantastic database platform. Postgres is open source, cross-platform, free, and offers an amazing feature set which, in my mind, exceeds those of its principal peers in the relational database space.

Postgres offers all of the (mostly) standards-compliant SQL/relational database features you would expect, plus a host of exciting and innovative features. Highlights include a JSON datatype (and also JSONB!), an array datatype, and the HStore type, which essentially allows the specification of a column as containing a list of key/value pairs. We’ll take a tour of PostgreSQL in another post, but first, let’s get the thing installed and running.
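
As a quick, hypothetical taste of what the JSON support looks like (the jsonb cast and the ->> extraction operator are both available in 9.4):

A One-Line Taste of the jsonb Type:
select '{"name": "AC/DC", "albums": 2}'::jsonb ->> 'name' as artist_name;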

Installation Steps for PostgreSQL on Linux Mint or Ubuntu

At the moment, my preferred Linux distro is Linux Mint 17 (“Qiana”) with the Cinnamon desktop. This is a long-term support release of the Linux Mint distro, very stable, and an excellent place to start. If you do not have a dedicated Linux machine, it is simple enough to spin up a VM using VirtualBox.

As of this writing, the most recent version of PostgreSQL is version 9.4, which brought with it some very cool features such as full JSONB support. However, the 9.4 release is not available directly using the Advanced Packaging Tool (APT) or the Linux Mint/Ubuntu Software Manager.

Fortunately, the PostgreSQL Global Development Group (PGDG) maintains an APT repository of PostgreSQL packages for Debian and Ubuntu-derived Linux distros.

Before we can install Postgres, we need to add the package source for the distro we are using. In my case, I am using Linux Mint 17, which is derived from, and compatible with, the Ubuntu 14.04 (“Trusty Tahr”) release. We’ll see why this matters in a moment.

Add the Postgres Package Source for Your Linux Release

We need to create a sources file reflecting the proper Postgres source for our particular distro. In my case, as noted above, we need the source compatible with the “Trusty” release of Ubuntu. We can add the file from the terminal like so (make sure you use sudo in all of the following steps):

Add the PGDG APT Source file From the Terminal:
~ $ sudo touch /etc/apt/sources.list.d/pgdg.list

 

Now that the file exists, open it in your editor of choice (we’ll use gedit here):

Open the pgdg.list File in gedit (use sudo):
~ $ sudo gedit /etc/apt/sources.list.d/pgdg.list

 

then add the following line in gedit and save (where I used “trusty” below, use the name of your release for Ubuntu, or the corresponding Ubuntu release if you are using Linux Mint):

Add the Postgres Package Repository and Specify Your Distro Release:
deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main

 

Save, and close gedit.

Alternately, we can achieve all of the above in one shot from the terminal like so (take note of the placement of single and double quotes here…):

Add the Package Source in one multi-line Terminal Command:
~ $ sudo sh -c \
'echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" > \
/etc/apt/sources.list.d/pgdg.list'

 

Add the Postgres Package Repository Key

Next, add the package repository key:

Add the Postgres Package Repository Key:
~ $ sudo apt-get install wget ca-certificates
~ $ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

 

Update, Upgrade, and Install Postgres

Then, we need to update our package sources:

Update Package Sources:
~ $ sudo apt-get update

 

Once that’s done, we need to upgrade packages to the latest versions:

Upgrade Packages:
~ $ sudo apt-get upgrade

 

Note, this can be a long process. Also, you may be prompted at several points to make some choices about configuration items. Specifically, you may be informed that this, that, or the other configuration file has been changed, and asked if you want to keep your original version, or replace it with the package maintainer’s version. Select “Y” to accept the package maintainer’s version in these cases.

When the upgrade process finishes, we are ready to install Postgres (we’ll also include pgadmin3):

Install Postgres:
~ $ sudo apt-get install postgresql-9.4 pgadmin3

 

This shouldn’t take long (especially compared to the installation times for certain other GUI-installer-based database platforms. I’m looking at you, SQL Server). Once the installation completes, we’re ready to configure our new database.

Configuring Postgres for Use

We’ve now installed both PostgreSQL and the database management utility Pg Admin 3. Next, we should understand a few things about how PostgreSQL works, out of the box.

The Postgres User

When PostgreSQL was installed, a system user account named postgres was created, with a matching user account in Postgres. By default, the postgres user account is not configured with a password, so it is not possible to log into the server using the postgres user account without first creating a password for it. This postgres account has an all-access pass on your postgres database server, permission-wise. The postgres user account is analogous to the sa account in SQL Server. For security reasons, it is recommended that a password not be created for the postgres account.
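
As an aside, on systems where your own account has sudo rights, sudo can reach the postgres account in a single hop (an alternative to the su - approach we’ll use below):

Access the postgres User Via sudo (Alternative):
~ $ sudo -u postgres psql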

The Postgres Database

PostgreSQL is installed with a default database named…wait for it… postgres. From the PostgreSQL documentation:

Creating a database cluster consists of creating the directories in which the database data will live, generating the shared catalog tables (tables that belong to the whole cluster rather than to any particular database), and creating the "template1" and "postgres" databases . . .

. . . The postgres database is a default database meant for use by users, utilities and third party applications.

For the most part, we use the postgres database for admin purposes, and create new databases on the PostgreSQL server to suit our needs.

The psql Command Line Utility

PostgreSQL includes psql, a command line utility for managing your databases and server. While a GUI-based utility such as pgadmin3 is often easier to use in the day-to-day, the command line utility psql is also handy. Psql offers complete control of your Postgres system from the terminal, including the ability to execute SQL queries.

Also, we need to use psql to perform our initial configuration, and to create an initial database super user.
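
A few of the psql meta-commands worth knowing from the start (these are standard psql commands, not specific to this setup; \? lists them all, and \h shows help for SQL statements):

Handy psql Meta-Commands:
\l         list databases
\c dbname  connect to the database named dbname
\dt        list tables in the current database
\du        list user accounts (roles)
\q         quit psql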

Create a Super User Account

Since we will not be creating a password for the postgres user account, we need a super-user account in order to work with our database in the day-to-day.

To do this, we will gain access to the postgres account through the system root user, and then use that postgres account to create a new super-user account on our Postgres installation which can be regulated more effectively. As an example, from my own machine (comments denoted by ##):

Access the Postgres User Through Root:
## switch user to root:
xivsolutions@mint-vm ~ $ su -
Password: 
## switch user to postgres:
mint-vm ~ # su - postgres
postgres@mint-vm ~ $ 

 

As we can see, we now have a prompt for the postgres user. We can now log in to the default postgres database and, using psql, create a super user account for ourselves:

To get into the psql command line utility, we type the following:

Enter the psql Command Line Utility:
postgres@mint-vm ~ $ psql

 

Now, from the psql prompt, enter the following. Note the name you specify for your super-user account should match the system user account you plan to use to manage your Postgres Installation (use your own user account name in place of youruseraccount here):

Create a New Super User from the psql Prompt:
postgres=# CREATE USER youruseraccount
postgres-# WITH SUPERUSER CREATEDB CREATEROLE
postgres-# PASSWORD 'userAccountPassword';

 

Notice in the above we can enter multiple lines of SQL. The SQL is not executed until we enter a semi-colon followed by enter. Which means, the semi-colon matters!
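
Before exiting, we can sanity-check that the new role exists; psql’s \du meta-command lists all roles and their attributes:

Verify the New Super User from the psql Prompt:
postgres=# \du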

Now, we can exit psql, exit the postgres user, and exit root like so:

Return to your normal user account:
postgres=# \q
postgres@mint-vm ~ $ exit
logout
mint-vm ~ # exit
logout
xivsolutions@mint-vm ~ $ 

 

With that, we should now be able to log in using psql and make sure everything is wired up correctly.

Log In Using Psql and Create a Test Database

Now, just to make sure everything is working correctly, let’s log in with psql using our new super user account and create a quick test database:

Log-In Using the New Super-User Account:
xivsolutions@mint-vm ~ $ psql postgres
psql (9.4.1)
Type "help" for help.
postgres=# 

 

Note in the above, we specified the postgres default database when we logged in, since there aren’t yet any other databases to connect to. We’ll use the default postgres database as our admin platform, create a new database, and then connect to the new database to test things out.

So, let’s create a database to play with (once again, make sure to end the SQL statement with a semi-colon!):

Create a Test Database Using Psql:
postgres=# CREATE DATABASE test_db WITH OWNER xivsolutions;

 

Now that we have our own database to mess with, use the \connect command to switch psql to that:

Switch the Active Psql Connection to the new Test Database:
postgres=# \connect test_db;
You are now connected to database "test_db" as user "xivsolutions".
test_db=# 

 

Now, let’s whip up a quick and dirty table, insert some data, and query:

Create Table, Insert Data, and Select Query Using Psql:
test_db=# CREATE TABLE products (id SERIAL PRIMARY KEY, name TEXT);
test_db=# INSERT INTO products (name) VALUES ('Brass Widgets');
INSERT 0 1
test_db=# SELECT * FROM products;
 id |     name      
----+---------------
  1 | Brass Widgets
(1 row)

 

This is a quick example of what can be done from the psql command line. psql is a powerful tool, and is very much worth exploring. While GUI-based tools like PG Admin 3 and others certainly can be helpful, gaining a certain level of comfort working with Postgres from the terminal will save you time in many cases.

That said, let’s configure PG Admin 3, so we can have our cake and eat it too.

Configure PG Admin 3 for Local Connections

PG Admin 3 is a free, open source GUI database management tool for Postgres. While the GUI itself is not as polished as some, all the essentials are there. We already installed PG Admin 3 when we installed Postgres itself, so let’s take a look.

First off, open PG Admin 3 from the terminal:

Open PG Admin 3:
xivsolutions@mint-17 ~ $ pgadmin3

 

You will probably see something like this:

[Screenshot: the PG Admin 3 GUI after opening]

Before we can do much of anything here, we need to add our new database server. Since we are working on our local machine, we’ll go ahead and add a local server, which points to localhost. Use File -> Add Server… to open the Server Registration Dialog, and you should see something like the following:

[Screenshot: the PG Admin 3 server registration dialog]

Note the default values you see in the above may differ slightly on your machine. We’re going to provide a name by which our local server will be known, specify the host, and make sure that the User Name matches our new Super User name (which in most cases should match our system user name, although this is not a requirement).

As an example, on my machine, I set things up like so:

[Screenshot: completed PG Admin server registration items for the local Postgres database]

In the above note that I deselected the “Store Password” option. In general, storing of Postgres user passwords is not recommended, although you may consider your circumstances and proceed accordingly. For example, if you are setting up a local development installation on your laptop, you may choose to store the password anyway. Next, hit “OK” and voila – our existing Postgres database server is available in PG Admin 3:

[Screenshot: the Postgres database server registered in PG Admin 3]

Now you can use PG Admin 3 to create new databases, tables, query, and all the other tasks you might expect. While the GUI experience is not as polished as some commercially available alternatives, you will find that PG Admin 3 is highly functional, and can handle most of your database administration needs quite well.

Postgres Configuration Options – Database Files and Directory Location

When we performed our installation and configuration above, we basically allowed Postgres to use its own sensible defaults for most things. Most of the time, if you are setting up Postgres on your own machine, for development or just to mess around, these are sufficient. However, understanding some of the basic configuration options, how and where these are set, can be most helpful.

When Apt installs Postgres, it creates a few directories by default:

/etc/postgresql/9.4/main – Primary PostgreSQL configuration files are located here. This directory is owned by root, so you will need to elevate your privileges to edit files in here. Of particular interest to us are two files:

  • pg_hba.conf – Configuration parameters for client authentication and access are specified in this file (a sample of its format appears just after this list).
  • postgresql.conf – Configuration parameters for important file locations, logging, resource usage, and a host of other database configuration items are set here. For our purposes, most of the defaults will do for now. However, the file locations may be of interest, as we’ll see in a moment.
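
As a hypothetical illustration of the pg_hba.conf format (each line maps a connection TYPE, DATABASE, and USER, plus an ADDRESS for TCP connections, to an authentication METHOD; your distro’s defaults may differ):

Sample pg_hba.conf Entries:
# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       postgres                peer
host    all       all       127.0.0.1/32  md5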

/var/lib/postgresql/9.4/main – Default location for the database cluster and associated files. Above, when we initialized our database, we were working with database files in this location.

Specify a Different Directory for the Database Cluster Files

For a variety of reasons we may wish to specify a different directory for our database files. We may want to keep them more easily accessible, or we might have set up our system with a separate partition for data files. In any case, if we want to use a location other than the default for our database cluster, we can.

The Postgres cluster directory must be owned by the postgres user, so we need to create the directory, and then assign ownership to postgres.

For our example, we’ll create a /database directory in our file system root, and within that a pg subdirectory:

Create a New Directory for Postgres Data:
$ sudo mkdir -p /database/pg
$ sudo chown -R postgres /database

 

In the above, the -p flag tells mkdir to make sure any leading directories are created on the way to the final directory. The -R flag used with chown tells chown to work recursively, and apply the ownership change to any nested directories or files.

Now we need to initialize the new database cluster in the new directory. Note, if you are already running an existing postgres instance, it is best to stop the server before moving the data directory:

Stop Existing Postgres Instance:
$ sudo service postgresql stop

 

Once again, we will need to get to our postgres user through root, and then initialize the new cluster (note – I’m showing the full terminal prompt here for clarity, so we can see which user privileges are used):

Initialize the New Postgres Cluster Using initdb:
xivsolutions@mint-vm / $ su -
mint-vm ~ # su - postgres
postgres@mint-vm ~ $ /usr/lib/postgresql/9.4/bin/initdb -D /database/pg
postgres@mint-vm ~ $ exit
mint-vm ~ # exit
xivsolutions@mint-vm / $

 

Don’t forget to exit from the postgres user, and then also exit root as shown.

Next, we need to edit the postgresql.conf file and point the data_directory variable to the new cluster location. We can open the file with elevated privileges in gedit like so:

Open the postgresql.conf File with Elevated Privileges in Gedit:
$ sudo gedit /etc/postgresql/9.4/main/postgresql.conf

 

Scroll down in the file to the file locations section, comment out the existing entry for data_directory (using the ‘#’ character), and add a new entry pointing to our new directory like so:

Replace the data_directory Parameter to Point at the New Directory Location:
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
# Add a '#' to the beginning of the line below:
# data_directory = '/var/lib/postgresql/9.4/main'    # use data in another directory
# THIS IS OUR NEW DIRECTORY PARAMETER:
data_directory = '/database/pg'          
# Blah blah more configuration stuff . . . 

 

Save and exit Gedit, then re-start the postgres server:

Restart the Postgres Server:
$ sudo service postgresql restart


NOTE: If you forgot to stop an existing instance of Postgres before moving the data directory (or for other potential reasons), you may receive an error indicating that the pid file is invalid, and that you should manually kill the stale server process. If this happens, you can manually kill the process with the following:

Manually Kill a Stale Postgresql Server Process:
sudo pkill -u postgres

 

Before restarting the server, it is important to make certain all Postgres processes have, in fact, stopped. Make sure that the shell command ps returns no results when used as follows:

Ensure All Postgres Processes have been Killed:
ps -u postgres

 

Then go ahead and restart postgres as per previous.

You should now be running Postgres in the new location. Note that any existing database files did not move as part of this process, so you will need to follow the previous steps for creating a new super user, etc.

In upcoming posts, we’ll take a look at some of the stand-out features of this database.


About the author

My name is John Atten, and my "handle" on many of my online accounts is xivSolutions. I am fascinated by all things technology and software development. I work mostly with C#, JavaScript/Node, and databases of many flavors. Actively learning, always. I dig web development. I am always looking for new information, and value your feedback (especially where I got something wrong!). You can email me at:

johnatten at typecastexception dot com
