From Ruby’s Grape to Martini in Go for Building Web API Server

We have a website built on Rails, and we also serve a bunch of APIs for customers. The API server is built on Grape, an amazing REST-like API micro-framework for Ruby. Recently I have been using my spare time to learn Go, a pretty new but fast-growing language, and I am extremely impressed by its simplicity and efficiency. I prefer learning by doing, so I started this experiment to rewrite our API in Go; I wanted to see how hard it is to write code in Go compared to Ruby.

I googled around for a Go web framework that is suitable for building an API service and easy to start with, and I finally found Martini. I really like its simple routing design, flexible middleware handlers, and smart injector. In the following paragraphs, I'm going to compare Grape to Martini by coding a basic version of our API server in each. The complete source code can be found on GitHub: https://github.com/steventen/grape-vs-martini

Example Requirements

The API server just uses a ‘key‘ param in the query string for authentication, and it responds with a customized JSON format, like this:

{"status": "Success", "data": [...]} # if success
{"status": "Fail", "error_message": "Bad api key"} # if failed

To simplify this experiment, there are only two models: Company and Project. A company has many projects, and each project belongs to a company. Every company has a unique API key for authentication. This example only implements two API endpoints:

GET /projects(.json)
GET /projects/:id(.json)
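Combining the response format with these endpoints, a successful call to GET /projects would return something like this (illustrative data):

```json
{"status": "Success", "data": [{"id": 1, "name": "Project A"}]}
```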

Let’s see how we can implement this example on each web framework.

Implementation On Grape

Models

Our grape server is mounted on Rails. In Rails’s ActiveRecord, those two models can be represented as

class Project < ActiveRecord::Base
  belongs_to :company
end

class Company < ActiveRecord::Base
  has_many :projects
end

We also use the grape-entity gem to manage which model fields are exposed, and the alias names shown in the JSON response.

module MySite
  module APIEntities
    class Project < Grape::Entity
      expose :id
      expose :name
    end
  end
end

Authentication

Authentication by API key needs to happen before every request. In Grape, you just put the authentication method into a before block, and you can define helpers to keep your code clean:

class API < Grape::API   
  ... ...

  helpers do     
    def current_company       
      key = params[:key]       
      @current_company ||= Company.where(:api => key).first
    end

    def authenticate!
      error!({ "status" => "Fail", "error_message" => "Bad Key" }, 401) unless current_company
    end
  end

  before do
    authenticate!
  end

  ... ...
end

Routes

Grape has very convenient get and post methods, so our API routes can be coded like:

class API < Grape::API   
  ... ...

  get "projects" do     
    projects = current_company.projects 
    present :data, projects, :with => APIEntities::Project
    present :status, "Success"
  end

  get "projects/:id" do
    project = current_company.projects.where(id: params[:id]).first
    if project
      {"data" => {"id" => project.id, "name" => project.name}, "status" => "Success"}
    else
      # or: error!({ "status" => "Fail", "error_message" => "Project not found" }, 404)
      { "status" => "Fail", "error_message" => "Project not found" }
    end
  end
end

Notice that Grape also provides great helper methods like namespace and route_param to easily create complex routes, and a params DSL for parameter validation. Those features are not shown in this example.

With Rails’ Active Record query interface, this code is fairly simple. In the GitHub repo there are two versions: one runs on pure Rack, and the other runs on Rails.

Implementation On Martini

Models

Models can be represented by structs in Go:

type Project struct {
  Id   int    `json:"id"`
  Name string `json:"name"`
}

type Company struct {
  Id  int    `json:"id"`
  Api string `json:"api_key"`
}

One nice thing is the tag syntax after each field declaration. These string tags control how the fields are named during JSON encoding, which is equivalent to Grape Entity's alias names.

Now that we have lost ActiveRecord, we can use Go's database/sql package for queries. The sql package provides a generic interface around SQL databases, similar to Java's JDBC: for a different database, you just swap in the associated driver without changing your code. There is a great tutorial on how to use the sql package. Since we use MySQL, we import the related packages as:

import (
  "database/sql"
  _ "github.com/go-sql-driver/mysql"
)

We need three methods: 1) get a company by API key, 2) get the list of projects under a certain company, and 3) get a project based on its project id:

func GetCompany(db *sql.DB, key string) (Company, int) {
  var company_id int
  var api_key string
  err := db.QueryRow("select id, api from companies where api = ? limit 1", key).Scan(&company_id, &api_key)
  switch {
  case err == sql.ErrNoRows:
    return Company{}, 0
  case err != nil:
    fmt.Println(err)
    return Company{}, -1
  default:
    return Company{company_id, api_key}, company_id
  }
}
}

func GetProject(db *sql.DB, company_id int, project_id int) (Project, int) {
  var (
    id   int
    name string
  )
  err := db.QueryRow("select id, name from projects where id = ? and company_id = ? limit 1", project_id, company_id).Scan(&id, &name)
  switch {
  case err == sql.ErrNoRows:
    return Project{}, 0
  case err != nil:
    fmt.Println(err)
    return Project{}, -1
  default:
    return Project{id, name}, id
  }
}

func GetProjects(db *sql.DB, companyId int) []Project {
  projects, err := db.Query("select id, name from projects where company_id = ?", companyId)
  if err != nil {
    fmt.Println(err)
  }
  var (
    id   int
    name string
  )
  p := make([]Project, 0)
  defer projects.Close()
  for projects.Next() {
    err := projects.Scan(&id, &name)
    if err != nil {
      fmt.Println(err)
    } else {
      p = append(p, Project{id, name})
    }
  }
  return p
}

Authentication

The server code sits inside the main function. We first create a Martini instance with the default settings.

func main() {
  ... ...
  m := martini.Classic()
  ... ...

In order to do our API-key-based authentication, we can use a Martini middleware handler. Middleware handlers sit between the incoming HTTP request and the router, and can do the same job as the before block in our Grape code. The code looks like this:

m.Use(render.Renderer())

m.Use(func(res http.ResponseWriter, req *http.Request, r render.Render) {
  api_key := req.URL.Query().Get("key")
  if api_key == "" {
    r.JSON(404, map[string]interface{}{"status": "Fail", "error_message": "Need api key"})
  } else {
    current_company, company_id := GetCompany(db, api_key)
    if company_id < 0 {
      r.JSON(404, map[string]interface{}{"status": "Fail", "error_message": "Bad api key"})
    } else {
      m.Map(current_company)
    }
  }
})

Notice that in this code we use Go's URL.Query().Get() method on http.Request to retrieve the API key from the query string. This method is very useful. There is also a ParseQuery() function, which returns a map of the values specified for each key. These functions are very helpful when you want to handle POST data. For more information, please refer to the net/url package.

Also in the code above, we include Martini's render middleware, which helps us render serialized JSON responses.

m.Use(render.Renderer())

Thanks to Martini's injection feature, you can get the render.Render object anywhere you want, and simply use code like the following to generate a JSON response:

r.JSON(404, map[string]interface{}{"status": "Fail", "error_message": "Bad api key"})

One more thing to notice is that we also used Martini's so-called “Global Mapping”:

m.Map(current_company)

This way, the current_company object is mapped globally and is available to be injected into any handler's argument list; you will see its use shortly.

Routes

Routing is not hard. The code is shown below, and it is very intuitive:

m.Get("/projects", func(current_company Company, r render.Render) {
  projects := GetProjects(db, current_company.Id)
  r.JSON(200, map[string]interface{}{"status": "Success", "data": projects})
})

m.Get("/projects/:id", func(current_company Company, params martini.Params, r render.Render) {
  paramId, err := strconv.Atoi(params["id"])
  if err != nil {
    r.JSON(404, map[string]interface{}{"status": "Fail", "error_message": err.Error()})
    return
   }
  project, id := GetProject(db, current_company.Id, paramId)
  if id > 0 {
    r.JSON(200, map[string]interface{}{"status": "Success", "data": project})
  } else {
    r.JSON(404, map[string]interface{}{"status": "Fail", "error_message": "Project not found"})
  }
})

Notice that the current_company object is injected into the argument list of the handler functions. Besides, martini.Params can be used to get params found in the route. Here we use it to find the project id, and use the strconv package to convert it to an integer.

Well, that's it! If you want to run your server, just use:

m.Run()

You can also use Go's standard http.ListenAndServe from the net/http package:

http.ListenAndServe(":8080", m)

Conclusion and Benchmark

In this post, we focused on three major parts of writing an API server — auth, models, and routes — and compared Ruby's Grape with Go's Martini. I'm still a beginner in Go (I have been learning it for less than a week), but it really feels fun and easy to use. As shown, Martini provides most of what we need to write a simple API server.

Benchmarking is not the important part of this post, but I still ran ab with -c 10 -n 1000 on my local machine. MySQL is used; the sample data contains 10 companies, each with 50 projects, so there are 500 projects in total.

Test environment: Macbook Air CPU 1.7GHz Core i5, 8GB DDR3, OSX 10.9.1
ruby -v 2.0.0p247, go version go1.2 darwin/amd64, rails 3.2.16

Grape On Rack

Concurrency Level: 10
Time taken for tests: 16.277 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 2303000 bytes
HTML transferred: 2211000 bytes
Requests per second: 61.44 [#/sec] (mean)
Time per request: 162.769 [ms] (mean)
Time per request: 16.277 [ms] (mean, across all concurrent requests)
Transfer rate: 138.17 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 23 162 33.9 172 289
Waiting: 17 156 33.2 169 278
Total: 23 162 33.9 173 290

Grape On Rails

Concurrency Level: 10
Time taken for tests: 15.902 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 2492000 bytes
HTML transferred: 2211000 bytes
Requests per second: 62.88 [#/sec] (mean)
Time per request: 159.024 [ms] (mean)
Time per request: 15.902 [ms] (mean, across all concurrent requests)
Transfer rate: 153.03 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 15 158 23.5 170 186
Waiting: 15 158 23.5 170 185
Total: 16 158 23.5 171 186

Go Martini

Concurrency Level: 10
Time taken for tests: 0.900 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 2314000 bytes
HTML transferred: 2211000 bytes
Requests per second: 1110.80 [#/sec] (mean)
Time per request: 9.003 [ms] (mean)
Time per request: 0.900 [ms] (mean, across all concurrent requests)
Transfer rate: 2510.14 [Kbytes/sec] received

Grape on Rails uses the Puma server and gets around 63 requests/s, whereas the Go server gets 1110 requests/s. The Go server is faster, no doubt.

Full source code with sample data can be found at github: https://github.com/steventen/grape-vs-martini

Solving a JDBC Connection Exception

Recently, our filtering worker, which I wrote in Java, kept getting connection errors. The worker uses JDBC to connect to a MySQL server and run some queries. The errors look like this:

[ERROR] Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.

[ERROR] Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

At first I thought there were too many threads using up all the connections, so I switched to a connection pool using BoneCP. But the error was still there.

Then I created a simple local cache class in between to decrease the number of queries, but the error still happened.

To make a long story short, I started to realize that the problem might come from MySQL itself.

After some testing, I finally got the answer: it was the ‘bind-address‘ setting in the MySQL configuration. After commenting it out, everything works smoothly, and the annoying errors are all gone.
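For reference, this is the relevant part of the MySQL configuration (assuming the default location /etc/mysql/my.cnf on Ubuntu; restart MySQL after editing):

```
[mysqld]
# bind-address = 127.0.0.1    <- comment this line out
```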

Sharing Sessions and Authentication Between Rails and Node.js Using Redis

Recently, I have been trying to implement a real-time notification feature for our Rails application. Socket.IO from Node.js is really a good choice, and since I've written quite a lot of code using EventMachine, it's really not hard to understand.

It turns out the really tricky part is how to share session information and do authentication between Rails and Socket.IO. Firstly, you cannot directly read Rails' sessions from the cookie in Node.js, because they are encrypted. Secondly, even if you manage to use redis-store to store sessions in Redis, Node.js will still fail to get the correct content. As discussed in this pull request, redis-store serializes data using Marshal.dump and Marshal.load, which Node.js cannot recognize; it will always try to parse the data as JSON. I tried hard to find a Marshal parser in the JavaScript world, and also tried to replace Marshal with JSON in redis-store, but neither was easy, and both caused other troubles for our existing Rails app.
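To see why Node.js cannot read Marshal-serialized sessions, compare the two serializations in plain Ruby (a minimal sketch with made-up session data):

```ruby
require 'json'

session = { "user_id" => 42 }

# Ruby-only binary serialization (redis-store's default); the output
# starts with Marshal's version header and is meaningless to Node.js:
marshaled = Marshal.dump(session)
marshaled.start_with?("\x04\b")   # => true (version 4.8 header)

# A portable text encoding that Node.js can JSON.parse directly:
as_json = JSON.generate(session)  # => "{\"user_id\":42}"
JSON.parse(as_json)               # => {"user_id"=>42}
```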

So, finally, after some experimenting, here is my workaround. Let Rails store sessions in the cookie as usual (no need to change it), but manually create a copy of the session data in Redis after the user signs in, and expire it after the user signs out. In order to do that, I override two methods provided by Devise inside ApplicationController:

# app/controllers/application_controller.rb
def after_sign_in_path_for(resource_or_scope)
  #store session to redis
  if current_user
    # a unique MD5 key
    cookies["_validation_token_key"] = Digest::MD5.hexdigest("#{session[:session_id]}:#{current_user.id}")
    # store session data or any authentication data you want here, generate to JSON data
    stored_session = JSON.generate({"user_id"=> current_user.id, "username"=>current_user.screen_name, ... ...})
    $redis.hset(
      "mySessionStore",
      cookies["_validation_token_key"],
      stored_session,
     )
   end
end

def after_sign_out_path_for(resource_or_scope)
  #expire session in redis
  if cookies["_validation_token_key"].present?
    $redis.hdel("mySessionStore", cookies["_validation_token_key"])
  end
end

Basically, when the user first signs in, we create a key called ‘_validation_token_key’ inside the cookies, whose value is a unique MD5 string. Meanwhile, this MD5 value is stored as a field of a Redis hash called ‘mySessionStore’ using ‘hset’; the corresponding value is the JSON-encoded session data you want to keep. When the user signs out, Devise calls the ‘after_sign_out_path_for’ method, and you can delete the hash field in Redis at that time.
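The token derivation itself is plain Ruby and deterministic, so both sides can rely on it; with made-up values standing in for session[:session_id] and current_user.id:

```ruby
require 'digest'

# Hypothetical values for illustration
session_id = "2c3f0fb0a6a5a4e1"
user_id    = 7

token = Digest::MD5.hexdigest("#{session_id}:#{user_id}")
token.length  # => 32, always a 32-character hex string
```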

The key part of this code is the _validation_token_key stored in the cookies: when the user connects to Socket.IO, the server reads this key from the cookie and then fetches the related session value from Redis. You can easily use this to do client authentication across Rails and Node.js. The example code on the Node.js server side looks like this:

var io = require('socket.io').listen(3003);
var redis = require('redis');
var cookie = require("cookie"); // cookie parser
var redis_validate = redis.createClient(6379, "127.0.0.1");

io.configure(function (){
  io.set('authorization', function (data, callback) {

    if (data.headers.cookie) {

      data.cookie = cookie.parse(data.headers.cookie);
      data.sessionID = data.cookie['_validation_token_key'];

      // retrieve session from redis using the unique key stored in cookies
      redis_validate.hget(["mySessionStore", data.sessionID], function (err, session) {

        if (err || !session) {
          return callback('Unauthorized user', false);
        } else {
          // store session data in nodejs server for later use
          data.session = JSON.parse(session);
          return callback(null, true);
        }

      });

    } else {
      return callback('Unauthorized user', false);
    }
  });
});

... ...

After this, if the client passes authentication, you can easily access session values in the ‘connection’ callback, like this:

io.sockets.on('connection', function(client){

  var user_id = client.handshake.session['user_id'];
  var username = client.handshake.session['username'];
  ... ...
});

Install Gisgraphy on Ubuntu 12.04 from Scratch

Gisgraphy is a free, open-source geocoding and web services solution. It is a great alternative to Google's geocoding API, which has lots of limitations on usage. Gisgraphy can provide very relevant geocoding results, since it combines both the GeoNames and OpenStreetMap datasets. In fact, besides geocoding, Gisgraphy can be used for reverse geocoding, street search, find-nearby, full-text search, and address parsing. I'd recommend you go to their demo site and try it!

Here, I'll show you step by step how to install Gisgraphy 3.0 on a local machine running Ubuntu 12.04. I'll use Java JDK 7, PostgreSQL 9.1, and PostGIS 1.5. (Notice that Gisgraphy 3.0 does NOT support PostGIS 2.0.) The official site does provide an installation guide, but it is somewhat out of date.

1. Install Java JDK

1.1 Install oracle-jdk7

Run the following commands in a terminal to install JDK 7:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-jdk7-installer

You can check if you successfully installed java by running:

java -version # should get java version "1.7.0_21" or something like that
javac -version # should get javac 1.7.0_21 or something like that

1.2 Set up Java environment

Open the .bashrc file in your home directory using vim or any other text editor:

vim ~/.bashrc

Add the following line at the end of the file:

export JAVA_HOME=/usr/lib/jvm/java-7-oracle

Reload the settings by running:

source ~/.bashrc

Now you can check whether the setting took effect by running:

echo $JAVA_HOME # should return /usr/lib/jvm/java-7-oracle

2. Install PostgreSQL 9.1 and PostGIS 1.5

2.1 Install PostgreSQL and PostGIS

Run the following commands to install PostgreSQL:

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:pitti/postgresql
sudo apt-get update
sudo apt-get install libpq-dev
sudo apt-get install postgresql
sudo apt-get install postgresql-9.1-postgis

Check if postgresql is successfully installed:

psql -V

2.2 Create a password for the user ‘postgres’

Enter the postgres console as the user ‘postgres':

sudo -u postgres psql

Then, inside the console, run the following command to change the password of ‘postgres':

ALTER USER postgres PASSWORD 'yourpassword';
\q # quit the postgres console

2.3 Create the database, language, and PostGIS functions

All the following commands use the user ‘postgres’ with the password you just created.

# create the database
psql -U postgres -h 127.0.0.1 -c "CREATE DATABASE gisgraphy ENCODING = 'UTF8';"

#create language
createlang -U postgres -h 127.0.0.1 plpgsql gisgraphy

#create postgis function
psql -U postgres -h 127.0.0.1 -d gisgraphy -f /usr/share/postgresql/9.1/contrib/postgis-1.5/postgis.sql

psql -U postgres -h 127.0.0.1 -d gisgraphy -f /usr/share/postgresql/9.1/contrib/postgis-1.5/spatial_ref_sys.sql

After all the settings are done, restart the server by running:

sudo /etc/init.d/postgresql restart

3. Linux file limit settings

To avoid messages like “Too many open files” when Solr opens a large number of files, you must increase the maximum open-file limit. Open a terminal and edit the limits.conf file using vim:

sudo vim /etc/security/limits.conf

Add the following 2 lines to the file (notice: do not miss the * mark):

* hard nofile 20000
* soft nofile 20000

That's it; now everything is set up. Next, we are going to install the Gisgraphy server.

4. Install Gisgraphy

4.1 Download Gisgraphy

Download Gisgraphy from here.

Open your terminal, go to the directory where the file was downloaded, and unzip it:

unzip gisgraphy-3.0-beta2.zip

mv gisgraphy-3.0-beta2 gisgraphy

4.2 Initialize tables

After that, we need to create the tables:

cd gisgraphy/

psql -Upostgres -d gisgraphy -h 127.0.0.1 -f ./sql/create_tables.sql

Then, add the default users:

psql -Upostgres -d gisgraphy -h 127.0.0.1 -f ./sql/insert_users.sql

The above command creates two default users: one is ‘admin’ with password ‘admin’, and the other is ‘user’.

4.3 Settings

In order to make the server run, we need to fill in the postgres password in the jdbc.properties file. Inside the gisgraphy directory:

vim webapps/ROOT/WEB-INF/classes/jdbc.properties

Open the jdbc.properties file and fill the jdbc.password field with your password. Notice: do not leave any space after ‘='.

jdbc.username=postgres
jdbc.password=yourpassword

Then it's pretty much done. The last thing is to set up the environment inside the env.properties file, which is also under the webapps/ROOT/WEB-INF/classes/ directory.

There are 3 parameters that I think are worth a look:

 importer.geonamesfilesToDownload=US.zip
 importer.openstreetmapfilesToDownload=US.tar.bz2
 googleMapAPIKey=yourkey

For me, I'm only interested in data for the USA, so I set geonamesfilesToDownload and openstreetmapfilesToDownload to only download US data. This saves a lot of space. What's more, googleMapAPIKey can be used to show the map on the demo server; you can get a key from Google's API console.

All the other settings can be referenced in the documentation.

4.4 Run the server

OK, now it's time to run the server. Change the file mode to executable, and then run it:

chmod +x launch.sh
./launch.sh

Now you should be able to visit the http://localhost:8080/mainMenu.html page. The next thing is to go through the wizard on the main page to download the dataset. Yeah!

Linode vs Digital Ocean Performance Benchmarks

Linode has recently increased the CPU from 4 cores to 8 cores, and also doubled the memory of all their plans.

To be honest, I really don't know how 8 cores could be fully used by a website on their lower 1GB or 2GB plans. I really wish they had upgraded to SSD disks instead; I think that's the real bottleneck.

Digital Ocean is becoming a real competitor; its $5 and $10 low-price server options with SSD disks make it stand out.

I purchased and benchmarked 3 servers:

DigitalOcean1G: 1 Core CPU, 1GB RAM, 30GB SSD, $10 /month

DigitalOcean2G: 2 Cores CPU, 2GB RAM, 40GB SSD, $20 /month

Linode 1G: 8 Cores CPU, 1GB RAM, 24GB Storage, $20 /month

All servers were freshly installed with the Ubuntu 12.04 x64 server version. The Digital Ocean servers are located in New York, whereas the Linode server is located in Atlanta. The test script is from ServerBear.

Detailed results can be found at the links below:

DigitalOcean1G: http://serverbear.com/benchmark/2013/04/15/77Yz6LDBTg7Iofxu

DigitalOcean2G: http://serverbear.com/benchmark/2013/04/15/0NH1vFxtjGBme8Ze

Linode 1G: http://serverbear.com/benchmark/2013/04/15/BgVR1lhaq7ENOCUA

UnixBench results

DigitalOcean1G:
UnixBench (w/ all processors) 1387.1
UnixBench (w/ one processor) 1386.6

DigitalOcean2G:
UnixBench (w/ all processors) 1873.1
UnixBench (w/ one processor) 1183.7

Linode1G:
UnixBench (w/ all processors) 1860.7
UnixBench (w/ one processor) 491.4

UnixBench gives us a basic score of the system's performance. I'm really surprised that Linode's 8 cores didn't perform as well as I expected. To give you an idea of how bad it is, below is the result from one of my cheap 4-core dedicated servers at OVH:

UnixBench (w/ all processors) 4017.1
UnixBench (w/ one processor) 1603.1

At least you can see how it looks when every single core is REALLY a full core :)

IOPS FIO results

DigitalOcean1G:
Read IOPS 4444.0
Read Bandwidth 17.7 MB/second
Write IOPS 2295.0
Write Bandwidth 9.1 MB/second

DigitalOcean2G:
Read IOPS 3838.0
Read Bandwidth 15.3 MB/second
Write IOPS 2572.0
Write Bandwidth 10.2 MB/second

Linode1G:
Read IOPS 776.0
Read Bandwidth 3.1 MB/second
Write IOPS 624.0
Write Bandwidth 2.4 MB/second

FIO provides a view of the system's I/O performance. Without SSDs, Linode performed poorly, as I expected, but the results are about average among existing VPSes.

Conclusion

Yes, these raw performance results don't mean everything. But not everyone can resist the temptation of a lower price with better performance.

Validate Attachment File Size and Type in Rails

Uploading files is a common feature for websites, and the CarrierWave gem provides a very simple and flexible way to implement it in a Rails application.

In realistic situations, you may only allow users to upload files within a limited size or of certain types. For example, only image files (extensions .jpg, .jpeg, .gif, .png) with a 5 MB maximum size are allowed. Below, I will show you how to validate both file size and file extension.
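Framework aside, the core check boils down to comparing the extension and the size against those limits. A minimal plain-Ruby sketch (the helper name is made up):

```ruby
ALLOWED_EXTENSIONS = %w[jpg jpeg gif png]
MAX_FILE_SIZE = 5 * 1024 * 1024  # 5 MB in bytes

def valid_upload?(filename, size_in_bytes)
  # Normalize the extension: ".PNG" -> "png"
  ext = File.extname(filename).delete(".").downcase
  ALLOWED_EXTENSIONS.include?(ext) && size_in_bytes <= MAX_FILE_SIZE
end

valid_upload?("avatar.png", 1024)            # => true
valid_upload?("avatar.exe", 1024)            # => false (bad extension)
valid_upload?("photo.jpg", 6 * 1024 * 1024)  # => false (too big)
```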

The validations should be implemented on both the front end (client side) and the back end (model level). Here I assume you have already created a User model and a string field :avatar to mount the CarrierWave uploader, following this guide.

1. Client Side Validation

The front end (client side) validation is implemented using jQuery.

First, we need to create a file field inside your form, using Rails' file_field helper:

 <%= f.file_field :avatar, :onchange => "validateFiles(this);",
       :data => { :max_file_size => 5.megabytes } %>

Here, the onchange event is triggered when a file is selected, and the method validateFiles is called. Notice that I create a data attribute “max_file_size” to store the maximum allowed file size. You can change this value to suit your needs.

Then, we need to implement the validateFiles method. Put the following JavaScript code inside your .js file:

function validateFiles(inputFile) {
  var maxExceededMessage = "This file exceeds the maximum allowed file size (5 MB)";
  var extErrorMessage = "Only image file with extension: .jpg, .jpeg, .gif or .png is allowed";
  var allowedExtension = ["jpg", "jpeg", "gif", "png"];

  var extName;
  var maxFileSize = $(inputFile).data('max-file-size');
  var sizeExceeded = false;
  var extError = false;

  $.each(inputFile.files, function() {
    if (this.size && maxFileSize && this.size > parseInt(maxFileSize)) {sizeExceeded=true;};
    extName = this.name.split('.').pop();
    if ($.inArray(extName, allowedExtension) == -1) {extError=true;};
  });
  if (sizeExceeded) {
    window.alert(maxExceededMessage);
    $(inputFile).val('');
  };

  if (extError) {
    window.alert(extErrorMessage);
    $(inputFile).val('');
  };
}

Basically, this code checks whether the input file is over the size limit and whether its extension is included in the allowedExtension array. An alert window pops up when validation fails, and the input field is cleared.

You can customize the error messages by changing the values of maxExceededMessage and extErrorMessage; you might also want to change the allowed file extensions by changing the allowedExtension array.

That’s it for the client side!

2. Model Level Validation

Now, we need to add the same validations at the model level. For the file extension check, you just need to uncomment the extension_white_list method inside your uploader class:

class AvatarUploader < CarrierWave::Uploader::Base

... ...

  def extension_white_list
    %w(jpg jpeg gif png)
  end
end

Furthermore, the validation for file size is done with a Rails custom validator. Inside the User model, add the following code:

class User < ActiveRecord::Base
  ... ...
  validate :avatar_size_validation

  ... ...
  private

  def avatar_size_validation
    errors[:avatar] << "should be less than 5MB" if avatar.present? && avatar.size > 5.megabytes
  end
end

And that’s it!

Date, Time, DateTime in Ruby and Rails

There are three classes in Ruby that handle dates and times. Date and DateTime both come from the date library, and there is another class, Time, from its own time library.

Both DateTime and Time can be used to handle year, month, day, hour, min, and sec attributes. But behind the scenes, the Time class stores an integer: the number of seconds since the Epoch, also known as unix time.
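For instance, the Epoch itself and the round-trip through the integer form can be checked directly:

```ruby
# The Epoch is 1970-01-01 00:00:00 UTC, i.e. unix time 0
Time.at(0).utc.year  # => 1970

# Converting to the integer form and back is lossless down to whole seconds
now = Time.now
Time.at(now.to_i).to_i == now.to_i  # => true
```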

The Time class has some limits. Firstly, it could only represent dates between 1970 and 2038 (since Ruby 1.9.2, it can represent 1823-11-12 to 2116-02-20). Secondly, the time zone is limited to UTC and the system's local time zone in ENV['TZ'].

What's more, Rails provides a really good time class called ActiveSupport::TimeWithZone. It is similar to Ruby's Time class, with added support for time zones.

One thing worth noticing is that Rails always converts times to UTC whenever it writes to or reads from the database, no matter what time zone you set in the configuration file. You can use `<attribute_name>_before_type_cast` to get the original value stored in the database, for example (e.g. created_at):

object.created_at_before_type_cast

Below are some useful snippets that I use most often to deal with dates and times.

1. Time

# Get current time using the time zone of current local system
Time.now

# Get current time using the time zone of UTC
Time.now.utc

# Get the unix timestamp of current time => 1364046539
Time.now.to_i

# Convert from unix timestamp back to time form
Time.at(1364046539)

# Use some string format, this one returns => "March 23, 2013 at 09:48 AM"
Time.at(1364046539).strftime("%B %e, %Y at %I:%M %p")

For the Time class, I prefer converting to the unix timestamp, because the integer representation can easily be stored, indexed, or ordered. It is also handy when the distance between two times matters more than the actual time, as with tweets, where it's better to show ‘1 minute ago’ than the actual time.
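A minimal ‘time ago’ computation from stored unix timestamps might look like this (illustrative values):

```ruby
posted_at = Time.now.to_i - 90         # pretend the tweet is 90 seconds old
elapsed   = Time.now.to_i - posted_at  # seconds since it was posted

minutes_ago = elapsed / 60             # integer division: 90 / 60 => 1
"#{minutes_ago} minute ago"            # => "1 minute ago"
```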

More time string formats can be found in Ruby's Time documentation.

2. Time with Zone (ActiveSupport::TimeWithZone)

TimeWithZone instances implement the same API as Ruby Time instances.

# Set the time zone of the TimeWithZone instance
Time.zone = 'Central Time (US & Canada)'

# Get current time using the time zone you set
Time.zone.now

# Convert from unix timestamp back to time format using the time zone you set
Time.zone.at(1364046539)

# Convert from a unix timestamp using an explicitly given time zone,
# with a custom string format => "03/23/13 09:48 AM"
Time.at(1364046539).in_time_zone("Eastern Time (US & Canada)").strftime("%m/%d/%y %I:%M %p")

Rails also provides a lot of very useful helper methods; they read like pretty straightforward English.

# Get the date time of n day, week, month, year ago
1.day.ago
2.days.ago
1.week.ago
3.months.ago
1.year.ago

# beginning of or end of the day, week, month ...
Time.now.beginning_of_day
30.days.ago.end_of_day
1.week.ago.end_of_month

# feel free to use those methods from Time class
1.week.ago.beginning_of_day.to_i

You can find more methods by checking the doc.

Time distance

Rails also provides time distance methods inside ActionView::Helpers to get the Twitter-style time format:

# inside of your .erb view files

diff = Time.now.to_i - 1.hour.ago.to_i
distance_of_time_in_words(diff)

distance_of_time_in_words_to_now(1.hour.ago)

Use customized time zone by user

For a Rails application, you can set the default time zone in /config/application.rb:

# /config/application.rb
config.time_zone = 'Central Time (US & Canada)'

To get a list of the time zone names supported by Rails, you can use:

ActiveSupport::TimeZone.all.map(&:name)

Normally, we would like to provide a form for users to choose their desired time zone. You can create a string field (e.g. :time_zone), and the form can be implemented as:

<%= f.time_zone_select :time_zone %>

# use US time zone only, with default
<%= f.time_zone_select :time_zone, ActiveSupport::TimeZone.us_zones, :default => "Pacific Time (US & Canada)" %>

To make the user's time zone setting work, we can use the method use_zone, which overrides Time.zone locally inside the supplied block.

To use this method, we can add an around_filter inside ApplicationController, as suggested by RailsCasts, like this:

# /app/controllers/application_controller.rb

around_filter :user_time_zone, if: :current_user

private

  def current_user
    @current_user ||= User.find(session[:user_id]) if session[:user_id]
  end
  helper_method :current_user

  def user_time_zone(&block)
    Time.use_zone(current_user.time_zone, &block)
  end

3. Date and DateTime

For most cases, the Time class with Rails' ActiveSupport time zone support is sufficient. But sometimes, when you just need a string of year, month, and day, the Date class is still worth a try.

For example, in one of my applications, we use date strings as keys to store count information in Redis. To generate a list of date strings, I use:

# Generate date string in 30 days
days_str = (30.days.ago.to_date...Date.today).map{ |date| date.strftime("%Y:%m:%d") }

Date also comes with a very handy array of day names that you can easily use in a drop-down select field.

Date.today.wday # the day of week
Date::DAYNAMES[Date.today.wday] # => "Saturday"
Date::DAYNAMES.each_with_index.to_a #  => [["Sunday", 0], ["Monday", 1], ["Tuesday", 2], ["Wednesday", 3], ["Thursday", 4], ["Friday", 5], ["Saturday", 6]]

# Use it in select field like this
# ...
# <%= select(:report, :day, Date::DAYNAMES.each_with_index.to_a, {:selected => 1}, :class => "form-control") %>

Time, Date, and DateTime are all interchangeable via the to_time, to_date, and to_datetime methods:

# Convert DateTime to Time
DateTime.parse('March 3rd 2013 04:05:06 AM').to_time.class # => Time

# Convert Time to Date
1.day.ago.to_date.class # => Date