
Sunday, 14 August 2016

Docker Concepts Plugged Together (for newbies)

Although Docker looks like a promising tool facilitating project implementation and deployment, it took me some time to wrap my head around its concepts. Therefore, I thought I might write another blog post to summarize and share my findings.

Docker Container & Images

Docker is an application that runs containers on your laptop, but also on staging or production servers. Containers are isolated application execution contexts which do not interfere with each other by default. If something crashes inside a container, the consequences are limited to that container. A container can open ports, through which it can interact with the external world, including other containers that have opened ports.

You can think of a Docker image as a kind of application ready to be executed in a container. In fact, an image can be more than just an application. It can be a whole Linux environment running the Apache server and a website to test, for example. By opening port 80, you can browse the content as if Apache and the website were installed on your laptop. But they are not. They are encapsulated in the container.

Docker runs in many environments: Windows, Linux, Mac. One starts, stops and restarts containers with Docker using available images. Each container has its private file system. One can connect and 'enter' a running container via a shell prompt (assuming the container runs Linux, for example). You can add files to and remove files from the container. You can even install more software. However, when you delete the container, these modifications are lost.

If you want to keep these modifications, you can create a snapshot of the container, which is saved as a new image. Later, if you want to run the container with your modifications, you just need to start a container with this new image.
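Here is a minimal command-line sketch of this workflow (image and container names are illustrative):

# Start a container from the public Ubuntu image and open a shell in it
docker run -it --name mybox ubuntu /bin/bash
# ... inside the container: install packages, edit files, then exit ...

# Snapshot the modified container as a new image
docker commit mybox myuser/ubuntu-tweaked

# Start a fresh container from the snapshot, modifications included
docker run -it myuser/ubuntu-tweaked /bin/bash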

In theory, it is possible to run multiple processes in a container, but it is not considered a good practice.

Docker Build Files & Docker Layers

But how are Docker images created in the first place? In order to create an image, you need to have Docker installed on your laptop. Then, in a separate directory, you'll create a file named Dockerfile. This file contains the instructions to create the image.

Most often, you don't create an image from scratch; you rely on an existing image, for example Ubuntu. This is the first layer. Then, as Docker processes each instruction from the Dockerfile, each corresponding modification creates a new layer. It's like painting a wall: if you start with a blue background and then paint some parts in red, the blue disappears under the red.

Once Docker has finished its job, the image is ready. In other words, a Docker image is a stack of layers. Each time you launch a container, Docker simply uses the assembled image as-is for execution. It does not recreate it from scratch.
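As a sketch, a Dockerfile for the Apache example above could look as follows (image tag and paths are illustrative):

# The base image is the first layer
FROM ubuntu:16.04
# Each further instruction adds a new layer on top
RUN apt-get update && apt-get install -y apache2
# Copy the website to test into Apache's document root
COPY website/ /var/www/html/
# Document the port to open
EXPOSE 80
# Run Apache in the foreground as the container's process
CMD ["apachectl", "-D", "FOREGROUND"]

Running docker build -t myuser/apache-website . in that directory assembles the image layer by layer.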

Docker Volumes & Docker Registry

A Docker registry is simply a location where images can be pushed and stored for later use. There is a concept of image versions, including a latest version. There is a public Docker registry (Docker Hub), but one can also install private registries.

A volume is a host directory located outside of a Docker container's file system. It is a means to make data created by a container in one of its directories available in an external volume directory on your laptop. A relationship is created between this inner container directory and the external directory on the local host. A volume 'belonging' to a container can be accessed by another container with proper configuration. For example, logs can be created by one container and processed by another. This is a typical use of volumes.

Contrary to the container's own file system, the data in a volume directory is never explicitly deleted when the container is erased. It can be accessed again later by the same or by other containers.

There is also a possibility to mount a local host directory onto a container directory. This makes the content of the local host directory available in the container. In case of collision, the mounted data prevails over the container's data. It's like a poster on the blue wall. When the local host directory is unmounted, the initial container data is available again. If you remove the poster, that part of the wall is blue again.
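A hedged command-line sketch of both situations (image names and paths are illustrative):

# Let a container write its logs into a volume
docker run -d --name producer -v /var/log/app logging-app-image

# Give a second container access to the same volume
docker run -d --name consumer --volumes-from producer log-processor-image

# Mount a local host directory over a container directory
docker run -d -v /home/me/website:/var/www/html apache-image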

But, Why Should I Use Docker?

Docker brings several big benefits. One of them is that you don't need to install and re-install environments to develop and test new applications, which saves a lot of time. You can also re-use images, building your own images on top of giants. This also saves a lot of time.

However, the biggest benefit, IMHO, is that you are guaranteed to have the same execution environment on your laptop as on your staging and production servers. Hence, if one developer works under Windows 10 and another on a Mac, it does not matter. This mitigates the risk of facing tricky environment bugs at runtime.

Hope this helped.

Saturday, 26 September 2015

Explain React Concepts & Principles, Because I Am Not A UI Specialist

I have been reading React's documentation, but found that it takes too many shortcuts in its descriptions of concepts and of how they relate to each other to understand the whole picture. It is also missing a description of the principles it relies on. Not everyone is already a top-notch Javascript UI designer. This post is an attempt to fill the gaps. I am assuming you know what HTML, CSS and Javascript are.

What Issues Does React Try To Solve?

Designing sophisticated user interfaces with HTML, CSS and Javascript is a daunting task if you write all the Javascript code yourself to display, hide or update parts of the screen dynamically. A lot of boilerplate code is required, which is a hassle to maintain. Another issue is screen responsiveness. Updating the DOM is a slow process which can impact user experience negatively.

React aims at easing the burden of implementing views in web applications. It increases productivity and improves the user experience.

React Concepts & Principles

React uses a divide-and-conquer approach based on components. In fact, they could be called screen components. They are similar to classes in Object Oriented Programming. A component is a unit of code and data specialized in the rendering of a part of the screen. Each component can be developed separately, and the code can be easily maintained. All React classes and elements are implemented in Javascript.

Classes & Components

With React, you create React classes and then instantiate React elements from these classes. React components can use other React components in a tree structure (just like the DOM is a tree structure too). Once an element is created, it is mounted (i.e., attached) to a node of the DOM, for example, to a div element having a specific id. The React component tree does not have to match the DOM structure.

No Templates

If you have developed HTML screens using CSS, it is likely you have used templates to render the whole page or parts of it. Here is something fundamentally different in React: it does not use templates. Instead, each component contains some data (i.e., state) and a method called render(). This method is called to draw or redraw the parts of the screen the component is responsible for. You don't need to compute which data lines were already displayed in a table (for example), which should be updated, which should be deleted, etc... React does it for you in an efficient way and updates the DOM accordingly.

State & Previous State

Each component has a state, that is, a set of keys and values, also called properties. The current state can be accessed with this.state. When a new state is set, the render() method is called automatically to compute the parts of the screen which have to be updated. This is extremely useful when JSON data is fetched with an Ajax call. You just need to set it in the corresponding React components and let React perform the screen updates.
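Here is a minimal sketch tying these ideas together, written against the createClass-era API current when this post was written (component and property names are illustrative):

// A React class with state and a render() method
var InvoiceList = React.createClass({
  getInitialState: function() {
    return { invoices: [] };       // initial state
  },
  render: function() {
    // Called on mount and again each time the state changes
    return (
      <ul>
        {this.state.invoices.map(function(invoice) {
          return <li key={invoice.id}>{invoice.amount}</li>;
        })}
      </ul>
    );
  }
});

// Instantiate an element from the class and mount it onto a DOM node
var list = React.render(<InvoiceList />, document.getElementById('invoices'));

// Later, when an Ajax call returns JSON data, set the new state;
// React recomputes and updates only the affected parts of the DOM
list.setState({ invoices: [{ id: 1, amount: '100 EUR' }] });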

JSX & Transpilation

Creating a tree of React UI components in plain Javascript means writing lengthy-ish code which may not always be very readable. React introduces JSX, which is something between XML/HTML and Javascript. It provides a means to create UI component trees with concise code. Using JSX is not mandatory.

On the downside, JSX needs to be translated into React-based Javascript code. This process is called transpilation (as opposed to compilation) and can be achieved with Babel. It is possible to preprocess (i.e., pre-transpile) JSX code on the server side and only deliver pure HTML/CSS/Javascript pages to the browser. However, the transpilation can also happen on the user side. The server sends HTML/CSS/Javascript/JSX pages to the browser, and the browser transpiles the JSX before the page is displayed to the user.

That's it! You can now dive into React's documentation. I suggest starting with Thinking In React. It provides the first steps to design and implement React screens in your applications. I hope this post has eased the React learning curve!

Monday, 11 February 2013

Securing A Service And JSP Pages

Securing A Spring Service

When a service is implemented in Spring, it can be secured with the @Secured annotation. Its parameter defines the list of roles granting access. In order to enable this annotation, one must add the following line to the Spring Security configuration XML file:

  <security:global-method-security secured-annotations="enabled"/>

A complete example is available here.
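As a sketch, a secured Spring service could look as follows (service and role names are illustrative):

import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Service;

@Service
public class InvoiceService {

    // Only users holding ROLE_ADMIN may call this method
    @Secured("ROLE_ADMIN")
    public void deleteInvoice(long invoiceId) {
        // ...
    }

    // Listing several roles grants access to users holding any of them
    @Secured({"ROLE_USER", "ROLE_ADMIN"})
    public void printInvoice(long invoiceId) {
        // ...
    }
}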

Securing A JSP Page

Spring defines its own set of JSP tags to control what is displayed to users. This is achieved with the Authorization tag. The Authentication tag can be used to retrieve user details data too.
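A hedged JSP sketch of both tags, assuming Spring Security's tag library and web expressions are enabled:

<%@ taglib prefix="sec" uri="http://www.springframework.org/security/tags" %>

<%-- Only rendered for users holding ROLE_ADMIN --%>
<sec:authorize access="hasRole('ROLE_ADMIN')">
  <a href="/admin">Administration</a>
</sec:authorize>

<%-- Displays a detail of the authenticated user --%>
<p>Welcome, <sec:authentication property="principal.username" />!</p>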

Saturday, 24 November 2012

Introduction To Git Concepts

This post is an introduction to (and a reminder of) Git concepts. It aims at easing the learning curve for those coming from a Subversion (or other) background. For more details, there is nothing like the official book.

Concepts

  • Git operates on repositories which contain a local database of files and corresponding file revisions.
  • Repositories contain files which can have 3 states:
    • Committed - The file is stored in the database.
    • Modified - The file has been modified, but has not been stored in the database.
    • Staged - The file has been modified and flagged to be committed to the database.
  • A file can also be untracked by Git. It has no status until it is added to the staging area. The add command can be used to stage a file.
  • When cloning or creating a Git repository in a local directory, this directory will contain:
    • A Git directory containing the local database and all meta information corresponding to the repository.
    • A staging area file containing information about what will be included in the next commit.
    • The remaining content is called the working directory, which contains files (extracted from the database) corresponding to a specific version of modifications stored in the database.
  • Typically, files are modified in the working directory, then staged for commit, then committed into the repository database (see the command sketch after this list).
  • It is possible to ignore files in a repository by marking them as such in the working directory. They will not be stored in the local database.
  • It is possible to remove files from the Git repository, or to move them around in the working directory.
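Here is a sketch of that typical workflow on the command line (URL and file names are illustrative):

# Clone a remote repository: this creates the Git directory,
# the staging area and the working directory
git clone https://example.com/project.git
cd project

# Modify a file in the working directory, stage it, then commit it
echo "note" >> notes.txt
git add notes.txt
git commit -m "Add a note"

# Show each file's state (untracked, modified or staged)
git status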

  • When cloning a repository locally, its origin (i.e., the original repository) is registered in the local repository as a remote repository. Several remote repositories can be attached to (added) and detached from (deleted) your local Git repository.
  • One can fetch all data from a remote repository (including branches). This operation will not merge this data with your local work though.
  • One can also pull all data from a remote repository, which is like a fetch followed by an automatic merge.
  • Pushing your repository content to a remote repository is like a pull, but the other way round. All modifications are transmitted to and merged on the remote repository.

  • One can create tags (of content at a specific version). Optionally, a tag can be annotated with information such as the tag creator, email, etc... It is possible to sign tags too (cryptographically). Signed tags can be verified.

  • One can create branches too. Technically speaking, these are pointers to a specific version in the repository database. The default branch is called master.
  • In order to remember which branch you are working on, Git has a specific pointer called HEAD.
  • One can use the Git checkout command to switch back and forth between branches. This will update the content of the working directory accordingly.
  • Modifications made to files in different branches are recorded separately.
  • Once a branch has reached a stable level, it can be merged back to master (or any other branch it came from). Then, it can be deleted (see the command sketch after this list).
  • Alternatively, a branch can be deleted without merging its modified content. That content is then lost forever.
  • Branches can evolve independently. If so, a merge operation will first find a common ancestor and create a new version in master (or any other target branch). This version will contain the modifications of both the target and the merged branch.
  • It is possible to work with remote branches from remote repositories too.
  • Rebasing replays the modifications of a branch on top of master (for example). With a single branch, the result is no different from a simple merge, except that the log history will no longer show the entries (versions) of the rebased branch as a separate line of development. This functionality is mostly useful when there are multiple branches created from multiple branches in a Git repository. You may want to merge one branch while keeping alive others having common ancestors.
  • Be careful with rebasing: it rewrites history. If other people were working on that branch (i.e., it is public) or on any sub-branches, their work will no longer integrate cleanly.
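A command sketch of the branching operations above (branch names are illustrative):

# Create a branch and point HEAD at it
git branch feature
git checkout feature

# ... stage and commit work on the branch ...

# Merge the stable branch back into master, then delete it
git checkout master
git merge feature
git branch -d feature

# Alternatively, replay a branch's commits on top of master
git checkout feature
git rebase master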

  • Fast forwarding is the process of moving a branch pointer forward. For example, a branch A is created from master. Work is performed on A and merged back to master. The master pointer may lag behind at an earlier version not containing the merged changes. It can be fast-forwarded to the version containing those merged changes.
  • Stashing is the practice of saving unfinished work aside without committing it yet. This allows one to switch branches without committing work in progress.
  • Submodules are a means to import another Git project into your Git project while keeping the commits separate. This is useful when that other project is a library which will be used in several Git projects.

Sunday, 11 November 2012

Introduction To Spring Data JPA Features

This post is a quick introduction to the features of Spring Data JPA. This Spring module is built on top of the Spring Data Commons module, which is a prerequisite read in order to understand this post.

Features

This module offers several features:
  • JpaRepository<T, ID extends Serializable> - This interface extends the CrudRepository and PagingAndSortingRepository interfaces of the Spring Data Commons module. It offers a couple of extra flush, find-all and delete operations. See here for an operational example.
  • JPA Query Methods - This is a powerful mechanism allowing Spring to create queries from method names in classes/interfaces implementing Repository. For example: List<Invoice> findByStartDateAfter(Date date); is automatically translated into select i from Invoice i where i.startDate > ?1 (see the repository sketch below).
  • @Query - Queries can be associated to methods in Repository classes/interfaces. For example, a method can be annotated with @Query("select i from Invoice i where i.startDate > ?1")
  • @Modifying - This annotation can be used in combination with @Query to indicate that the corresponding query will perform modifications. Hence, any outdated entities are cleared first.
  • @Lock - This annotation is used to set the lock mode type (none, optimistic, pessimistic, etc...) for a given @Query.
  • JpaSpecificationExecutor and Specification - This interface adds a couple of find and count methods to repository classes/interfaces. All have a Specification parameter, which adds predicates (i.e., where clauses) to the corresponding queries.
  • Auditable, AbstractPersistable and AbstractAuditable - The Auditable interface allows one to track modifications made to an entity (creation, last modification...). AbstractPersistable and AbstractAuditable are abstract convenience classes avoiding boilerplate code.
  • MergingPersistenceUnitManager - If a developer decides to modularize his/her application, he/she may still want to use a unique persistence unit, even though several are declared in separate XML files. The MergingPersistenceUnitManager solves this issue.
Finally, to enable JPA repositories, the annotation:

    @EnableJpaRepositories("com.my.repositories")

should be set on a Java @Configuration class.
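Here is a hedged sketch of a repository combining the features above (the Invoice entity and method names are illustrative):

import java.util.Date;
import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;

public interface InvoiceRepository extends JpaRepository<Invoice, Long> {

    // Derived query: Spring generates
    // "select i from Invoice i where i.startDate > ?1" from the name
    List<Invoice> findByStartDateAfter(Date date);

    // Explicit query attached to the method
    @Query("select i from Invoice i where i.startDate > ?1")
    List<Invoice> findStartedAfter(Date date);

    // Modifying query: outdated entities are cleared first
    @Modifying
    @Query("update Invoice i set i.paid = true where i.id = ?1")
    int markAsPaid(Long invoiceId);
}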

More Spring related posts here.

The Authenticated User Concept In Spring Security

A user, in the Spring Security context, is an instance of a class implementing the UserDetails interface. One can use it to check whether:
  • the user account is expired or locked
  • the user is enabled or not
  • credentials are expired or not
As a reminder, authentication requests are managed by an authentication manager delegating these to authentication providers. The latter perform the actual authentication.

By default, Spring configures a DaoAuthenticationProvider instance, and registers it in the default authentication manager. The main purpose of this provider is to let software developers choose the way they want to store UserDetails by setting an object implementing UserDetailsService. Such services have one function: load a user's details from its name. That's it! It can be a database, an in-memory database, etc...
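As a sketch, a custom UserDetailsService backed by some account storage could look as follows (Account and AccountDao are illustrative names):

import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

public class AccountUserDetailsService implements UserDetailsService {

    private final AccountDao accountDao; // illustrative storage abstraction

    public AccountUserDetailsService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    // The single function of the service: load a user's details by name
    public UserDetails loadUserByUsername(String username) {
        Account account = accountDao.findByUsername(username);
        if (account == null) {
            throw new UsernameNotFoundException(username);
        }
        return new User(account.getUsername(), account.getPassword(),
                AuthorityUtils.createAuthorityList("ROLE_USER"));
    }
}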

If you want to implement your own UserDetailsService, Marc Serrano has provided a detailed example using a JPA repository, which eliminates a lot of the boilerplate code. Such repositories are part of the Spring Data JPA features.

To implement a customized user and corresponding persistence, see the example available here.

More Spring related posts here.

Saturday, 3 November 2012

Explain Trunk, Branch, Tag And Related Concepts

Trunk, Branch and Tag concepts are relevant to revision control (or version control) systems. These systems are typically implemented as repositories containing electronic documents, and changes to these documents. Each set of changes to the documents is marked with a revision (or version) number. These numbers identify each set of modifications uniquely.

Reminder

A version control system, like Subversion, works with a central repository. Users can check out its content (i.e., the documents) to their local PC. They can perform modifications to these documents locally. Then, users can commit their changes back to the repository.

If there is a conflict between modifications made to documents by different users committing simultaneously, Subversion will ask each user to resolve the conflicts locally first, before accepting their modifications back into the repository. This ensures continuity between the revisions (i.e., sets of modifications) made to the content of the repository.

What is this good for? If the repository is used to store software development code files, each developer can check these files out locally and, after making modifications, make sure they compile properly before committing the modifications back to the repository. This guarantees that each revision compiles properly and that the code does not contain any incoherence.

In other words, this enables the implementation of Continuous Integration of software engineers' work in the same code base.

Trunk, Branches & Tags

Sometimes, software development requires working on large pieces of code. This can be experimental code too. Multiple software developers may be involved. These modifications are so large that one may want a temporary copy of the repository content to work on, without modifying the original content. This issue is addressed with trunk and branches.

When using Subversion, the typical directory structure within the repository is made of three directories: trunk, branches and tags. The trunk contains the main line of development. To solve the issue raised above, a branch can be created. This branch is a copy of the main line of development (trunk) at a given revision number.

Software developers can check the branch out, like they would with trunk content. They can perform modifications locally and commit content back to the branch. The trunk content will not be modified. Multiple software developers can work on this branch, like they would on trunk.

Once all modifications are made to the branch (or the experimental code is approved), these modifications can be merged back to trunk. Just like with a simple commit, if there is any incoherence, Subversion will request software developers to resolve it at the branch level, before accepting to merge the branch back into trunk.

Once the branch is merged, it can also be closed. No more modifications on it are accepted. Multiple branches can be created simultaneously from trunk. Branches can also be abandoned and deleted, in which case their modifications are never merged back to trunk.

When the software development team has finished working on a project, and every modification has been committed or merged back to trunk, one may want to release a copy (or snapshot) of the code from trunk. This is called a tag. The code is copied into a directory within the tags directory. Usually, the version of the release is used as the directory name (for example 1.0.0).

Contrary to branches, tags are not meant to receive further modifications. Further modifications should be performed only on trunk and branches. If a released tag needs modifications, a branch from the tag (not the trunk) should be created (for example, with name 1.0.x). Later, an extra tag can be created from that tag branch with a minor release version (for example 1.0.1).
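Here is a sketch of these operations with the Subversion command line (the repository URL is illustrative):

# Create a branch: copy trunk at its current revision
svn copy http://svn.example.com/repo/trunk \
    http://svn.example.com/repo/branches/big-feature \
    -m "Create branch for the big feature"

# Release: copy trunk into the tags directory
svn copy http://svn.example.com/repo/trunk \
    http://svn.example.com/repo/tags/1.0.0 \
    -m "Tag release 1.0.0"

# Maintenance branch created from the tag, not from trunk
svn copy http://svn.example.com/repo/tags/1.0.0 \
    http://svn.example.com/repo/branches/1.0.x \
    -m "Create maintenance branch for 1.0"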

Why work like this? Imagine a software application is released and put in production (version 1.0.0). The team carries on working on version 2.0.0 from trunk (or a branch from trunk). This makes sense regarding the continuity of the code line. Later, one finds a bug in version 1.0.0. A code correction is required. It cannot be performed on trunk since it already contains 2.0.0 code. Tag 1.0.0 must be used. Hence the need to create a branch from tag 1.0.0.

But what about the code correction created for the 1.0.0 bug? Shouldn't it be included in version 2.0.0? Yes, it should, but since branch 1.0.x cannot be merged back to trunk (it comes from tag 1.0.0), another solution is required. Typically, one will create a patch containing the code correction from branch 1.0.x, and apply it locally to a checkout of trunk. Then, this code correction can be committed back to trunk, and it will be part of version 2.0.0.

Branches created from tag releases have a life of their own. They are called maintenance branches and remain alive as long as released versions are maintained. Contrary to trunk branches, they are never merged back with their original tag. You could consider them little trunks for tag releases.

Friday, 2 November 2012

Introduction To CSS Concepts

This note is a reminder about CSS concepts, together with links and other useful pages or summaries. CSS stands for Cascading Style Sheets. These sheets are used at the presentation layer to apply styling (fonts, colors, layout...) to HTML documents.

HTML Concepts Reminder

HTML documents contain text and tags (also called markup). The following body of an HTML document contains a paragraph <p> tag:
...
<body>
  <h3>Some header</h3>
  <p class="myClass">This is a paragraph!</p>
</body>
...
This tag has an attribute called class. Browsers represent documents as a tree structure (the DOM), where each tag is a node.

CSS Rule

CSS sheets are electronic documents containing rules. For example:
h3 { color: green; }
A rule contains one or more selectors (h3) and a declaration block ({ color: green; }). Declarations are made of pairs of properties (color) and values (green), separated by semicolons. The above rule tells the browser displaying the HTML document to find all <h3> tags and apply the green color to the corresponding content.

Style Types & CSS Reset

By default, each browser has its own style which it automatically applies to HTML documents, unless a specific CSS style is declared in the HTML document. There are differences between browsers' default styles. Therefore, identical documents will be displayed differently in different browsers.

The solution is to use a CSS reset style sheet, which eliminates those differences.

There are different types of CSS styles:
  • Inline Style - Styling can be applied directly on HTML markup with the help of the style attribute. It prevails over all other styles in case of conflict, since it is considered the most specific. In the following example, no matter what, the green color will be applied to this paragraph:
<p style="color: green;">This is a paragraph!</p>
  • Embedded Style - If you want to apply a style to a whole document, for example, make all paragraphs green, you can use embedded styling with the <style> tag as follows:
<!doctype html>
<html>
  <head>
    <meta http-equiv="Content-Type"
      content="text/html; charset=UTF-8">
    <title>My HTML Document!</title>
    <style type="text/css">
      p { color: green; }
    </style>
  </head>
  <body>
    <p>This is a paragraph!</p>
    <p>This is another paragraph!</p>
  </body>
</html>
  • External Style (linked) - This is the most common type of CSS styling. All CSS rules are stored in separate files, and links to those files are included in HTML document headers:
...
<head>
  <link rel="stylesheet"
    type="text/css" href="style/global.css">
  ...
</head>
...
It is considered good practice to separate HTML markup from styling. Hence, using external style sheets is the way to go. However, this creates an issue: how does one apply styling to a unique HTML element? The solution is to use id selectors, or class selectors for groups of HTML elements. We will cover this later.

Using @import

@import is a CSS command used to import external CSS stylesheets. For example, large CSS files can be split into several sub-files. The main file can use @import to import them. The HTML document then only needs to link to the main CSS document:
@import url('/mycss/part1.css');
@import url('/mycss/part2.css');
@import url('/mycss/part3.css');
...
With @import, there is an extra way to import CSS stylesheets in HTML documents:
...
<head>
  <meta http-equiv="Content-Type"
    content="text/html; charset=UTF-8">
  <title>My HTML Document!</title>
  <style type="text/css">
    @import url('/mycss/part1.css');
    @import url('/mycss/part2.css');
    @import url('/mycss/part3.css');
    ...
  </style>
</head>
...

Inheritance

By default, when an inheritable property is applied to an HTML tag, it is applied to all the children elements too. If the following rule is applied:
p { color: green; }
on the following document:
...
<body>
  <p>This is some code: <code>var a = 2;</code></p>
</body>
...
The green color property is applied not only to <p>, but also to <code>. Otherwise, we would have to define a rule for <code> too, which can be tedious if a document has many parent-child relationships.

On the other hand, if we want to make sure a child element has the same property value as its parent, we can use the inherit CSS keyword:
code { color: inherit; }
In this case, no matter where <code> is used, the color of the corresponding content will match that of its parent. Note that not all properties are inherited (border, for example).

Selectors

There are several types of selectors in CSS:
  • Universal Selector - Selects all elements in the HTML document:
* { color: green; }
  • Element Selectors - Selects specific elements in the HTML document from their tag types (for example p for all paragraphs <p>):
p { color: green; }
  • Class Selectors - Selects all elements having a class attribute with the corresponding value. Here, "Some text" will be displayed in green:
<p class="myClass">Some Text</p>  // HTML

.myClass { color: green; }        // CSS
  • ID Selectors - Selects the element having an id attribute with the corresponding value. Here, the div element will have a width of 500 px:
<div id="section1"> . . . </div>  // HTML

#section1 { width: 500px; }        // CSS
  • Pseudo Class Selectors - Pseudo classes are, for example, used to change the color of links:
a:link { color: green; }
  • Child Selectors - For a given element, this selector allows the selection of the children in the DOM hierarchy. The following will select all paragraphs within div sections:
div > p { color: green;}
  • Attribute Selectors - Selects HTML tags having a specific attribute. Here the attribute is title:
[title] { color: green; }
It is possible to apply multiple classes to an HTML tag (multi-classing):
<div class="class1 cooking"> . . . </div>
One can group selectors to apply common properties, by separating them with commas:
p, h1, #myid, .myclass { color: green; }

Cascading

CSS stands for Cascading Style Sheets. Why cascading? Because multiple style sheets can be applied to the same HTML document before it is displayed (from highest to lowest priority):
  1. Author Styles - Any inline and embedded styles, together with external style sheets defined in the HTML document by its author.
  2. User Style - Although this is rarely the case, end users can configure a style sheet in their browser.
  3. Browser Style - By default, every browser has its own style sheet, which applies if no author or user style is available.
The highest-priority available style is applied.

Specificity

In case of conflicting CSS rules when applying styling, the prevailing rule will be the one having the higher selector specificity, for a given HTML tag.

A CSS rule's specificity is computed using a hierarchy of selector categories:
  1. Inline styling (<p style="color: green;">...</p>)
  2. Ids (#myid1, #myid2,...)
  3. Classes, attributes and pseudo-classes (.myClass1, .myClass2...)
  4. Elements and pseudo elements (h1, p, li, ul...)
The rule's selectors must be counted in each category. For example:
h1 em#myid { color: green; }
has a specificity of 0-1-0-2 (no inline style, one id, no classes, two elements).

Inline styling beats all CSS rules. If rule1 has more ids in its selector than rule2, then rule1 prevails over rule2. If they have the same number of ids, one repeats the comparison by counting classes, attributes and pseudo-classes, etc...

If two rules have exactly the same specificity, the last one declared prevails. The universal selector * and inherited values have a specificity of 0-0-0-0.

The !important Keyword

This is a CSS feature you should probably avoid in most circumstances. Basically, it overrides the specificity process. For example, the following CSS:
p { color: yellow !important; }
#myid { color: green; }
applied on the following HTML:
<p id="myid">Some text!</p>
will display the text in yellow rather than green, because !important overrides the higher specificity of #myid.

The Box Model

It is possible to define boxes for content in CSS. The content has height and width. The padding is the space between this content and its border. The border has a thickness. The margin is the additional space around the border.


This box model can be declared as follows:
.myid {
  width: 200px;
  height: 50px;
  padding: 8px 11px 4px 7px;
  border-style: solid;
  border-width: 9px 1px 3px 6px;
  margin: 10px 12px 2px 5px;
}

Float Behavior

The float property allows one to push elements (images, for example) to the left or to the right. Other elements or text will wrap around them. Yet, if you push too many elements to the left, there may not be enough width on the screen to display them all. The browser will display them on the next line.

Hence, the position of such elements is not fixed. It depends on the size of the browser screen.

Positions

Each element in a HTML document has a position property value:
  • static - This is the default position of all elements. The element is displayed where it naturally appears in the document.
  • relative - This means the element is positioned relative to its natural position. One can add 10 pixels to the left, for example. The final position can be impacted by any float behavior.
  • absolute - This allows one to position an element at a precise position, relative to the first parent having an absolute or relative position. If none is available, the element is positioned relative to the page itself.
  • fixed - This positions an element relative to the display window. It will not move when the page is scrolled up or down, and remains visible.

z-index

If several images (or other items) overlap each other in an HTML document, the z-index helps define which one should be on top and which one at the bottom. The higher the value, the closer to the top:
img {
  position: absolute;
  left: 100px;
  top: 100px;
  z-index: 30;
}

Media Queries

Media queries are a means to apply CSS rules conditionally. For example, one can check the available width of the screen and decide to display more or less information. This avoids having to develop separate HTML pages with separate CSS for different devices.
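For example, a sketch hiding a sidebar on narrow screens (breakpoint and selectors are illustrative):

/* Applied only when the viewport is at most 600px wide */
@media screen and (max-width: 600px) {
  #sidebar { display: none; }
  #content { width: 100%; }
}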

Wednesday, 24 October 2012

Introduction To REST Concepts

Introduction

This post aims at demystifying the REST (Representational State Transfer) web design concepts. REST is based on a client-server model. It is a set of principles describing how standards can be used to develop web applications, for example. Its main purpose is to anticipate common implementation issues and to organize the relationship between logical clients and servers. You could call it a set of best practices!

In practice, REST provides guidance on how to implement web application interfaces to the web. Typically, one says a web application is constructed in a REST-like way or not. REST is often associated (or implemented) with HTTP, but it could be implemented with other technologies too. REST is platform and language independent.

Roy Fielding, the inventor of REST, says REST aims at achieving the following:
  • Generality Of Interfaces - All web applications should implement their interfaces the same way. By sharing the same convention, other applications know how to call yours, and you know how to call theirs. Minimal learning curve for each new application.
  • Independent Deployment of Components - Once an application and its REST interfaces have been implemented and deployed, one must be able to implement or re-implement, and deploy any REST interfaces without having to rewrite or modify existing ones.
  • Encapsulate Legacy Systems - Existing applications which are not implemented in a REST-like way can be wrapped with REST interfaces, making them REST-like applications.
  • Intermediary Components To Reduce Interaction Latency - For example, in order to handle traffic, it is common to distribute user/client requests to several physical servers (which is not to be confused with logical servers). This is transparent for users. Since REST uses interfaces, implementing or adding extra layered components, such as physical servers to handle a peak of client requests, is easy.
  • Emphasizing The Scalability Of Component Interactions - This is complementary to the previous point. 
  • Enforce Security - Exchanging information over the Internet can be risky. Hackers can use it to twist the system. REST principles eliminate many of those risks.

Concepts

  • Resource - A logical resource is any concept (car, dog, user, invoice...) which can be addressed and referenced using a global identifier. Typically, each resource is accessible with a URI when implementing REST over HTTP (for example: http://www.mysite.com/invoice/34657).
  • Server - A logical server is where resources are located, together with any corresponding data storage features. Such servers do not deal with end user interfaces (GUI).
  • Client - Logical clients make requests to logical servers to perform operations on their resources. For example, a client can request the state of the resource, create a resource, update a resource, delete a resource, etc... Clients do not possess resources or corresponding data storage features. However, they deal with end user interfaces (GUI).
  • Requests and Responses - The interactions between clients and servers are organized as requests from client to server, and responses from server back to client. Requests can contain representations of the resource.
  • Representation - A representation is a document representing the current status of a resource. It can also be the new desired status when a client makes a request to update a resource, for example.

Principles

Here are some principles applicable in REST-like applications:
  • The state of a resource remains internal to the server, not the client - The client can request it, or update it with requests made to the server.
  • No client context saved on the server between requests - The server must not store the status of a client. Otherwise, this would break the scalability objective of REST when reaching a couple million users. Remember that requests can be distributed to several physical servers; storing client contexts there could cause physical resource consumption issues.
  • Client requests contain all information needed to service them - No matter which request is sent by a client to a server, it must be complete enough for the server to process it.
  • Session states are stored on the client side - If necessary, any information about the status of the communication between a logical server and a logical client must be held on the client side.
  • Multiple representations of a resource can coexist - The chosen format used to represent the state of a resource in requests and responses is free (XML, JSON...). Multiple formats can be used.
  • Responses explicitly indicate their cacheability - When a server returns a response to a request, the information it contains may or may not be cached by the client. If not, the client should make new requests to obtain the latest status of a resource, for example.
  • Code on Demand - This is an optional feature in REST. Clients can fetch some extra code from the server to enrich their functionalities. An example is Javascript.
About session states: implementing a login/logout (i.e., authentication) system between a physical server and a physical client requires saving session information on the server side. Otherwise, if it were saved on the client side, it could be hacked from the client side.

There is a general agreement that whatever 'resource' is required to implement authentication between the client and the server is considered out-of-scope for REST. These authentication resources do not have to follow REST principles (see here for more details).

REST Over HTTP

When implementing REST over HTTP, the logical REST client is typically a web browser and the logical REST server is a web server. The REST API (or service) must be hypertext driven.

About resource IDs:
  • The preference is given to nouns rather than verbs to indicate the type of a resource (cat, dog, car...).
  • The unique ID of a resource is a URI, for example: http://www.mysite.com/invoice/34657.
  • A group of resources can also be accessed with a URI, for example: http://www.mysite.com/user/7723/invoices.
It is also considered good practice to use URIs in resource representations when a resource refers to another resource. For example, in an XML document representing a resource:
<dog self='www.mysite.com/dog/923' >
    <name>Lassie</name>
    <owner ref='www.mysite.com/owner/411' />
</dog>

In order to perform operations on resources, simple HTTP is used to make calls between machines. HTTP knows several types of calls: PUT, GET, POST, DELETE, HEAD, CONNECT, PATCH, TRACE and OPTIONS.

However, REST typically uses only four: PUT, GET, POST and DELETE.
  • GET - Clients can request the status of a resource by making an HTTP GET request to the server, using the resource's URI. REST requires that this operation does not produce any side effect to the resource's status (nullipotent).
  • PUT - Creates or replaces a resource at a URI known to the client, for example: http://www.mysite.com/invoice/841. A REST PUT is (and must be) idempotent: invoice 841 must not be created multiple times if clients call that PUT several times. If the resource already exists, it is updated rather than recreated.
  • POST - Used when the client does not know the final resource URI (the next invoice number, for example). The request is made to http://www.mysite.com/invoice and the server assigns the identifier. REST also allows POST client requests to update the corresponding resource with information provided by the client. This operation is not idempotent.
  • DELETE - This operation removes the resource forever. It is idempotent.
REM: Implementing a http://www.mysite.com/invoice/add URI is not considered a REST-compliant practice.

The format (JSON, XML...) used to return representations of resources is set in the media type of the server response (Multipurpose Internet Mail Extensions - MIME).

In order to handle success or error outcomes, REST over HTTP recommends using the standard HTTP status codes.
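As an illustration, a GET exchange could look as follows (URI and payload are illustrative):

GET /invoice/34657 HTTP/1.1
Host: www.mysite.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

{ "id": 34657, "amount": 100.0,
  "owner": "http://www.mysite.com/user/7723" }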


Wednesday, 12 September 2012

Quick Introduction to SiteMesh Concepts

Sitemesh is a small framework facilitating the development of harmonious web page layouts across web applications. Instead of maintaining look-and-feel code in each page, this code is grouped into templates. When a page is rendered, its content is extracted and wrapped into a template to generate the final page (decoration).

Sitemesh is integrated with the Servlet technology via filters. Decorators (i.e. templates) are JSP files declared in a WEB-INF/decorators.xml file. These decorators rely on a specific JSP tag library:
<%@ taglib uri="http://www.opensymphony.com/sitemesh/decorator"
        prefix="deco" %>
These tags are used in the templates to extract the content of the page to be decorated, for example (a decorator sketch follows this list):
  • <deco:body /> - Extracts the body content of the page to decorate
  • <deco:head /> - Extracts the head section of the page to decorate
  • ...
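As a sketch, a decorator JSP using these tags could look as follows (layout details are illustrative):

<%@ taglib uri="http://www.opensymphony.com/sitemesh/decorator"
        prefix="deco" %>
<html>
  <head>
    <title>My Site - <deco:title /></title>
    <deco:head />
  </head>
  <body>
    <div id="header">Common header...</div>
    <deco:body />
    <div id="footer">Common footer...</div>
  </body>
</html>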
Sitemesh is configured with the WEB-INF/sitemesh.xml file. This is where the page parser and mappers to decorators are defined. Sitemesh comes with a set of decorator mappers (localization, browser compatibility...) to solve common requirements.

For more details about Sitemesh, see here.

Friday, 31 August 2012

Introduction to Spring Security Concepts

Spring Security is a complex subject with a steep learning curve. The purpose of this post is to try to reduce that learning curve and to serve as a reminder of the main security concepts every developer should master in order to configure security in Spring applications.

It is not a substitute for reading the official documentation, especially the Spring Security appendix describing the nuts and bolts of configuration elements. The Spring Security 3 book also helps connect the dots between concepts.

Facts

  • Authentication and Access Authorization – The main philosophy of Spring security is first to authenticate users, then to check their access credentials to resources. This is performed via a set of filters.
  • XML based configuration – So far, there is no such thing as Java-based configuration for Spring Security. Everything is based on XML configuration. You will not get rid of security.xml with Java programmatic configuration.
  • <servlet-name>-security.xml – One must specify the location of the security configuration XML file either in the contextConfigLocation parameter value (*) of web.xml or, if one uses configuration annotations, it can be imported with @ImportResource (**).
  • web.xml – One must configure the Spring security filter chain in the web.xml file (***) together with a filter mapping configuration to enable the Spring security.
  • Security Annotations - It is possible to enable JSR-250 annotations or Spring's @Secured annotations. Typically, these are used on services objects for access control.

Concepts

  • Auto-Config – A configuration element of the Spring Security namespace enabling (or not) the default configuration of Spring Security.
  • Access Decision Manager – Decides whether a user can access a resource or not.
  • Authentication Manager – It processes authentication requests via child authentication providers.
  • Authentication Provider – Depending on its accepted types of authentication requests, it processes them for approval or not.
  • Delegating Filter Proxy – A servlet filter capturing every user request and sending it to the configured security filters to make sure access is authorized.
  • Principal – Represents anyone authenticated.
  • Provider Manager – An authentication manager instance processing authentication request through a list of authentication providers.
  • Security Context – A user's secured authenticated session. It is stored in a security context repository.
  • Session Authentication Strategy – Should a user have a session? Should we retrieve any existing ones? Should we automatically create one at each login? How many sessions can a user have? What about timeout? What's the strategy for session handling?

General Scheme

For a secured page, the general functional behavior of Spring security is the following:
  1. A user makes a request for a secured page.
  2. The configured Authentication Manager checks the user credentials.
  3. If necessary, the user provides them (for example, login and password).
  4. If the credentials are not validated successfully, access to the page is refused.
  5. The configured Access Decision Manager then makes sure the identified user has the right to access the page (i.e. has proper authority).
  6. If the authority is not established, access to the page is refused.
  7. Else, the page is displayed.

Security Filters

Every user request passes through a set of filters. The Authentication Manager and the Access Decision Manager are both filters, functionally speaking. When auto-config is enabled, a set of default Spring security filters is automatically configured.

Here is how user requests are processed in more detail:
  1. When a user makes a request, Spring loads its security context.
  2. If the user's request URL is the logout URL (by default /j_spring_security_logout), the user is logged out.
  3. If the user's request URL is an authentication form submission (by default  /j_spring_security_check), an attempt to authenticate the user is performed.
  4. If no login page is configured, a default login page is displayed (if the user is not authenticated yet).
  5. Checks whether the request has an Authorization header. If yes, the user name and password are extracted for authentication. If authentication is successful, it is registered in the security context.
  6. Assuming a user was trying to access a page requiring authentication, this step retrieves the original request to that page, if the authentication is successful.
  7. The user request is wrapped together with the security context into a single object.
  8. If the user has not been authenticated successfully so far, it is flagged as anonymous.
  9. If the user has been authenticated, the session authentication strategy is applied. 
  10. Any AccessDeniedException and AuthenticationException thrown by any of the above are handled here.
  11. Delegation of authorization and access control decisions to an access decision manager.

Security Namespace

Spring security is defined in an XML document, just like Maven configuration is defined in a pom.xml file. It has a namespace (i.e., a set of XML tag elements) which can be used to activate or configure Spring security features. Again, read the Spring security appendix to learn about these in detail. It is a must to understand Spring Security.

Some of its main elements are (a combined configuration sketch follows the list):
  • <http auto-config='true'> - This is the parent element of web-related configuration elements. It creates the filter chain proxy bean called springSecurityFilterChain for security. It has an auto-config attribute, which can be set to install the default Spring security configuration elements.
  • <access-denied-handler> - Can be used to set the default error page for access denials.
  • <intercept-url pattern="/**" access="ROLE_USER"> - This element creates a relationship between a set of URLs and the required access role to visit these pages.
  • requires-channel - This is an <intercept-url> attribute which can be used to require the usage of https to access a set of URLs (i.e., secured channels).
  • <form-login> - Can be used to define the login page URL, the URL for login processing, the target URL after login, the login failure URL, etc...
  • <remember-me> - If a user is not authenticated, and 'remembered' information about the user is available (for example, from a cookie), it will be used.
  • <session-management> and <concurrency-control> - To implement session management strategies.
  • <logout> - To configure the default logout page.
  • <http-firewall> - To implement a firewall filter.
  • <authentication-manager> - A required configuration element. It creates a provider manager. The child elements are <authentication-provider>.
  • <authentication-provider> - Can be used to create an in-memory authentication provider. The children <user-service> and <user> elements can be used to define user login-password combinations. Other types of authentication providers can be configured too.
  • <password-encoder> - If the users' login and password are stored in a database (for example), one can use this configuration element to specify how the password should be encrypted.
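Here is a hedged sketch combining some of these elements (URLs, roles and user entries are illustrative):

<security:http auto-config='true'>
  <security:intercept-url pattern="/admin/**" access="ROLE_ADMIN" />
  <security:intercept-url pattern="/**" access="ROLE_USER" />
  <security:form-login login-page="/login" />
  <security:logout logout-success-url="/" />
</security:http>

<security:authentication-manager>
  <security:authentication-provider>
    <security:user-service>
      <security:user name="bob" password="secret"
          authorities="ROLE_USER, ROLE_ADMIN" />
    </security:user-service>
  </security:authentication-provider>
</security:authentication-manager>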

For a concrete Spring Security example, click here • More Spring related posts here.

REM: This blog does not cover all Spring Security features. Topics such as password encryption, storage of credentials, 'Remember Me', SSL connections, sophisticated access control, OpenID, LDAP and Client Certificate Authentication are covered in the Spring Security 3 book in detail.

------------------------------------------------------------

(*)
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>
    /WEB-INF/myApp-security.xml
  </param-value>
</context-param>

(**)
@Configuration
@ImportResource("classpath:my/package/security.xml")
public class ApplicationConfig {

    // ...

}
(***)
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Thursday, 10 May 2012

Introduction To Maven Concepts (Crash Course)

This is the post I wish someone had written to connect dots in 2007, when Maven documentation was scarce. This Maven tutorial is a starting point for those who have no experience with Maven, or who are willing to improve their global understanding of Maven concepts. It explains what Maven is, what you can do with it, but not how to do it. I will also give pointers to additional documentation.

What is Maven?

Maven is a software project implementation tool, but not a project management tool. In a factory, it would be the complete assembly line, taking in raw materials and producing finished products ready for use. It is mostly used by Java programmers, but it can be configured for other programming languages too.

If you ever have to work on a large Java project, you will most probably have to learn either Maven or Ant (another project implementation tool). Otherwise, your project will quickly become unmanageable as it grows in size. Ant requires you to configure all your requirements from top to bottom. You have to build the whole assembly line for a given project. On the other side, Maven comes with default behaviors. If there is something you don't like in the assembly line, you can reconfigure it or extend parts of it to meet your specific needs.

There have been religious wars between Ant and Maven users in the past. But today, they coexist in peace. In fact, it is pretty common for a Maven project to call Ant as a module to perform tasks Maven cannot perform itself.

Both Maven and Ant are functionally interchangeable. If an existing project works fine with Ant, you should not convert it to Maven. However, if you start a project from scratch, you may want to consider Maven, because of the very large support community and an incredible number of mature modules available for free under an open source license.

What Is The Form Factor?

Maven is a package you download and install on your PC. You just need to unzip it in a directory and make sure it is accessible from the command line (i.e., it has to be 'in the path'). Maven is a command line product with no graphical user interface (GUI). However, it is well integrated into free software development applications such as NetBeans or Eclipse. Most often, you will not need to run Maven from the command line. If you plan to use Ant with Maven, you will need to install it on your PC too.

How Does It Work?

When you start using Maven, it will first create a local repository on your PC. As you compile your software projects, all the results are posted in this local repository. The produced items are called artifacts. You can configure Maven to post those artifacts in other remote repositories or locations too, if necessary.

In addition to this local repository, there is what is called the central repository. It is a huge online repository containing tons of free artifacts contributed under an open source license over many years by thousands of developers. When Maven needs one of these artifacts to build a project and can't find it in the local repository, it tries to fetch it from the central repository, if the local PC is connected to the Internet. Maven can be configured to search for those artifacts in other public or private repositories too.

As a consequence, the first time Maven builds a project, it will download many artifacts from the central repository. It is a one time operation. You are not allowed to post something directly to the central repository. If you want to contribute your own artifacts, you need to read this.

There are three main types of repositories: the central repository, your local repository, and public or private proprietary repositories.

What Is A Maven Project?

Contrary to many other software implementation tools, Maven projects operate according to a standard directory structure where different items (code files, test files, etc...) are expected to be found in well-known directories. It is part of the delivered assembly line.

It is not a good idea to try to reconfigure this directory structure to fit your karma. If another software engineer gets to work on your project, he will be confused. On the other side, if you get to work on another Maven project, you will be happy to find what you are looking for where it is supposed to be. Learn about Maven directory structures. Don't be a baby, open your mouth and do take the pill (lol)!

Another key project item is the Project Object Model XML file, often called the 'pom' or 'pom.xml'. Each project has its own directory structure with a pom.xml located at its root. Here is one of the simplest possible pom.xml files:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>


  <groupId>com.mycompany</groupId>
  <artifactId>SimpleMavenProject</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

</project>

It contains the four items making up the coordinates of the project. Basically, the coordinates are the location of the project in repositories: a concatenation of the group id, the artifact id and the project version (for a definition of SNAPSHOT, see (*) below), together with directory separators. The packaging explains how and which default artifacts will be bundled into the repository. You can organize and identify your projects as you wish using the coordinates you want. The packaging depends on the type of project you want to create.

The POM is where you will configure the different parts of the assembly line to meet your needs if default configuration is not enough.

Typically, under the hood, all Maven projects are created using an archetype. It is a kind of template used to create the default content of POMs and the default directory structure of Maven projects. You can create your own archetypes if you need to, but existing ones will cover most of your needs.

How Does The Compile Process Work?

The compile process is called the build process in Maven. It can be configured to perform much more than a simple compilation. Therefore, you should avoid mentioning a 'compile process' when talking about Maven, because it is only one of the phases of one of Maven's build processes (one step in the assembly line).

When a build process is executed, it goes through a life cycle using the information contained in the POM. Each life cycle is made of phases. Plugins are attached to phases. A phase can contain multiple plugins. A plugin is basically a set of operations of a given type which may be executed during a build cycle. Each possible plugin operation is called a goal. When a plugin is attached to a phase, a goal is specified (or else the default goal is executed).

The life cycle goes through each phase sequentially and executes each default (or POM-configured) plugin goal, sequentially, until the end or until one of the plugin goals fails to execute. A phase (as opposed to a plugin goal) is a step in the life cycle of a build process. By default, a build process goes the whole way through, unless a specific phase is specified as the target. In this case, Maven stops at this life cycle step, even if it is executed successfully.

By default, some plugins are attached to phases. Each plugin has a default goal which will be invoked unless it is configured differently in the POM. This is the default assembly line. You can also attach additional plugins to the phases of your build cycles. These will be downloaded from the central repository if necessary. You can also write your own plugins if you need to and use them in your build cycles.

Think about all this in terms of Lego bricks attached to a structural frame.
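A command-line sketch of these ideas:

# Run the default life cycle up to and including the 'package' phase
mvn package

# Stop earlier in the life cycle, after the 'compile' phase
mvn compile

# Invoke one plugin goal directly (plugin:goal syntax)
mvn dependency:tree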

What About Maven Dependencies?

Large projects often use existing pieces of code (libraries) by making explicit references to them. In Maven projects, such dependencies on existing artifacts are specified in the POM, using their coordinates. Sometimes, these dependencies are only required for the build process (for the testing phase, for example). Sometimes, they need to be shipped as part of the delivered artifacts. The type of use of a dependency is called its scope. It is specified in the POM.
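For example, a hedged POM fragment declaring two dependencies with different scopes (artifact versions are illustrative):

<dependencies>
  <!-- Shipped with the delivered artifacts (default 'compile' scope) -->
  <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.1</version>
  </dependency>
  <!-- Only required during the build, for the testing phase -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.10</version>
    <scope>test</scope>
  </dependency>
</dependencies>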

What About Profiles?

To cover complex situations, for example when you need to create different types of artifacts for different target platforms, it would be tedious to maintain multiple POMs for the same project. The solution is called build profiles in Maven.

This is a means to set additional configuration when the build process is executed for a specific platform (building for Windows or for Linux, for example). In this case, the corresponding plugins are only executed if they have been defined in the corresponding platform profile in the POM. A POM can be executed by specifying (or not) a build profile. This determines the set of plugins which will be executed on top of the default configuration.
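A minimal sketch of a profile in the POM (profile id and content are illustrative):

<profiles>
  <profile>
    <id>linux</id>
    <build>
      <plugins>
        <!-- Plugins listed here are only executed
             when the 'linux' profile is active -->
      </plugins>
    </build>
  </profile>
</profiles>

The profile can then be selected on the command line, for example with mvn package -P linux.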

Conclusion

There are many other features available in Maven. We have only mentioned the basics. If you want to learn more about Maven, including how to use it, the best resource available on the net is the Maven Complete Reference online guide.

After reading this guide, you should explore existing Maven modules. Learn what they can do for you. If you need help, type your question in Google followed by the word 'StackOverflow'. It most probably has already been answered. If not, go to StackOverflow.com and ask your question there.

-----

(*) SNAPSHOT versions can be confusing at the beginning. Let's assume you are working on a project and have released version 1.0.0. You are now working on version 1.1.0, but it has not been released and won't be released until you are done. Version 1.1.0 is work in progress. Yet, with Maven's build process, you are creating temporary work-in-progress artifacts. In order to differentiate these from production-ready artifacts, you can add -SNAPSHOT to the work-in-progress version of your project (i.e., <version>1.1.0-SNAPSHOT</version>).

When you create Maven projects with dependencies on other artifacts/libraries, by default, Maven ignores SNAPSHOT versions of those dependencies. Yet, if you want to use a specific SNAPSHOT version of a dependency, you can explicitly specify it in your pom.xml. This is often necessary during the development phase of a project.

For Maven parent projects, see here • More about Maven tips and tricks here.