Using SemanticMerge with Visual Studio tools for Git

Microsoft recently released an updated version of its Visual Studio Tools for Git. These tools let us create repositories, branch, and merge inside Visual Studio. In this article we will see how to configure SemanticMerge as a custom merge tool in Visual Studio.

When we create a new project inside Visual Studio and add it to version control, a new repository is created in our local folder. Visual Studio adds its own diff & merge tool to the repository configuration, so to use SemanticMerge we need to modify that configuration.

What we need to do is simply edit the file called config inside our .git folder with our favourite text editor and add the following text (replace {USERNAME} with your own user):

[mergetool "MergeTool"]
path = C:/Users/{USERNAME}/AppData/Local/PlasticSCM4/semanticmerge/semanticmergetool.exe
cmd = \"C:/Users/{USERNAME}/AppData/Local/PlasticSCM4/semanticmerge/semanticmergetool.exe\" -b=\"$BASE\" -bn=\"$BASE\" -s=\"$REMOTE\" -sn=\"$REMOTE\" -d=\"$LOCAL\" -a -r=\"$MERGED\" -l=csharp -emt=\"\"\"C:/Users/{USERNAME}/AppData/Local/PlasticSCM4/semanticmerge/mergetool.exe\"\" -b=\"\"@basefile\"\" -bn=\"\"@basesymbolic\"\" -s=\"\"@sourcefile\"\" -sn=\"\"@sourcesymbolic\"\" -d=\"\"@destinationfile\"\" -dn=\"\"@destinationsymbolic\"\" -r=\"\"@output\"\" -t=\"\"@filetype\"\" -i=\"\"@comparationmethod\"\" -e=\"\"@fileencoding\"\"\" -edt=\"\"\"C:/Users/{USERNAME}/AppData/Local/PlasticSCM4/semanticmerge/mergetool.exe\"\" -s=\"\"@sourcefile\"\" -sn=\"\"@sourcesymbolic\"\" -d=\"\"@destinationfile\"\" -dn=\"\"@destinationsymbolic\"\" -t=\"\"@filetype\"\" -i=\"\"@comparationmethod\"\" -e=\"\"@fileencoding\"\"\"

And finally, replace the default merge tool. Before:

[merge]
tool = vsdiffmerge

After:

[merge]
tool = MergeTool
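
If we prefer not to edit the file by hand, this last switch can also be made from the command line (the mergetool section above, with its long cmd value, is easier to paste directly into the file):

git config merge.tool MergeTool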

After these simple steps, when we try to merge a branch and conflicts arise, we will get this window:

[Screenshot: the Visual Studio merge conflict window]

If we click on “Merge”, SemanticMerge will launch.

Happy hacking!

More info at SemanticMerge.com

Book: Driving technical change

This week I finished reading “Driving Technical Change”, a book about why it is so difficult to introduce new techniques or tools in our work environment, and how to have a better impact. It is a small book, about 130 pages, so it can be read in a quiet weekend.

The book is divided into four parts:

The first part encourages us to solve the right problem: it explains how to specify the problem we want to solve and how to find a solution, rather than pushing a tool or technique for its own sake. We must do some research to find out whether there is a solution that works better for our team than the one we are promoting.

The second part lists the most common stereotypes, called “sceptics”: those who may (and will) reject our solution, from the uninformed to the cynic, the irrational, the boss (management), and more. For every sceptic, this section provides a list of techniques that can help us overcome that rejection.

The third part is about those techniques: having a deep knowledge of the tool or technique we are pushing, knowing its possible flaws and their solutions, showing a working demo of how it can help the team from day one, or even preparing an intermediate solution as a bridge between the current status and the one we want to reach. All of these can be very useful to turn rejection into support. It also talks about trust: it is very important to be honest and build trust, and above everything, don't lie to your team.

Finally, the fourth and last part is about strategy: how to approach the members of your team who are easiest to convince and, with their support, move on to the more challenging ones. It acknowledges that sometimes, after all this work, we may not drive that technical change, but we will always have laid the first blocks for someone who may finish it in the future.

Personally, I found it very interesting. If you are trying to push a solution internally and feel lost, this book may be a good starting point.

Link: http://pragprog.com/book/trevan/driving-technical-change

On strings, methods, return variables and IL code

A few days ago, while reviewing some old code, I found a method that performed an operation on a string passed as an argument, stored the result in the same variable, and returned it at the end.

It looked weird, so I wanted to know what was really happening, and to check whether there is any real difference between overwriting the variable passed as an argument, creating a new variable, or directly returning the result of the call.

For that reason, I created a small sample project and then, using ILDasm, looked at what was really going on under the hood. ILDasm is a disassembler for the Intermediate Language (IL) generated when we compile C#.

Just before we start, some quick notes:

  • IL is an assembly-like, stack-based language: operands are pushed onto an evaluation stack, and the result of a function call is left on the stack before returning.
  • Arguments and locals are accessed by index, so when we execute ldarg.0 we are operating on the argument located at index 0.
  • The result of calls to external methods is also left on the stack.
  • IL is not the final code that will execute: it is compiled at runtime by the .NET JIT, so the resulting machine code may be slightly different.

And here is the code!

class Program
{
    public static string MyFirstCustomFunction(string a)
    {
        a = a.Substring(4);
        return a;
    }

    public static string MySecondCustomFunction(string b)
    {
        return b.Substring(4);
    }

    public static string MyThirdCustomFunction(string c)
    {
        var result = c.Substring(4);
        return result;
    }

    static void Main(string[] args)
    {
        Console.WriteLine(MyFirstCustomFunction("Lorem ipsum dolor sit amet"));
        Console.WriteLine(MySecondCustomFunction("Lorem ipsum dolor sit amet"));
        Console.WriteLine(MyThirdCustomFunction("Lorem ipsum dolor sit amet"));
    }
}

Let's start with the first method. If we launch ILDasm from the VS command prompt and load the generated executable (located in /bin/Debug inside our project folder), we will see this:

[Screenshot: ILDasm with the sample executable loaded]

Here we can see the IL for the first method:

.method public hidebysig static string  MyFirstCustomFunction(string a) cil managed
{
// Code size       16 (0x10)
.maxstack  2
.locals init ([0] string CS$1$0000)
IL_0000:  nop
IL_0001:  ldarg.0
IL_0002:  ldc.i4.4
IL_0003:  callvirt   instance string [mscorlib]System.String::Substring(int32)
IL_0008:  starg.s    a
IL_000a:  ldarg.0
IL_000b:  stloc.0
IL_000c:  br.s       IL_000e
IL_000e:  ldloc.0
IL_000f:  ret
} // end of method Program::MyFirstCustomFunction

What we see here can be summarized in the following points:

  • At the beginning we define a local variable that matches the return type specified in the header. This variable, placed at position 0, will contain the return value of the method.
  • Afterwards, we load the arguments onto the stack; in this case, a single argument.
  • Before calling the Substring function we must push the other argument onto the stack: a 4-byte integer with value 4.
  • Then we call the Substring method, specifying both the assembly and the full namespace that contain the String class. The result of that call is pushed back onto the stack.
  • After the call we take the value from the stack and place it back into the argument variable, replacing the existing value.
  • We load the argument onto the stack again and store it in local variable 0, the return variable.
  • Finally, before returning from the method, we place the return value on the stack so it can be accessed by the caller.

Instructions like br and nop are extra instructions that the compiler adds in Debug mode for better step-by-step debugging; there is a discussion on Stack Overflow about the subject, linked at the end of the article.

As we can see here, we are loading and storing the same value several times, and that may not be necessary at all.

Let’s jump into the second method:

.method public hidebysig static string  MySecondCustomFunction(string b) cil managed
{
// Code size       13 (0xd)
.maxstack  2
.locals init ([0] string CS$1$0000)
IL_0000:  nop
IL_0001:  ldarg.0
IL_0002:  ldc.i4.4
IL_0003:  callvirt   instance string [mscorlib]System.String::Substring(int32)
IL_0008:  stloc.0
IL_0009:  br.s       IL_000b
IL_000b:  ldloc.0
IL_000c:  ret
} // end of method Program::MySecondCustomFunction

As we can see, it begins in the same way, but after calling the Substring method the result is stored directly from the stack into the result variable, with no extra copying and without overwriting the argument.

This looks like a more efficient way of working, because we save an extra Read/Write operation.

Let's see what happens in the last case, using an extra variable defined inside the scope of the function:

.method public hidebysig static string  MyThirdCustomFunction(string c) cil managed
{
// Code size       15 (0xf)
.maxstack  2
.locals init ([0] string result,
[1] string CS$1$0000)
IL_0000:  nop
IL_0001:  ldarg.0
IL_0002:  ldc.i4.4
IL_0003:  callvirt   instance string [mscorlib]System.String::Substring(int32)
IL_0008:  stloc.0
IL_0009:  ldloc.0
IL_000a:  stloc.1
IL_000b:  br.s       IL_000d
IL_000d:  ldloc.1
IL_000e:  ret
} // end of method Program::MyThirdCustomFunction

The first notable difference is in the local variable definitions, which now include a second string variable to hold our intermediate value.

The main difference from the first function is that there are no extra accesses to the argument; but, as we are saving the result in a variable before returning it, we have the same double Read/Write pattern as in the first case.

To sum up, if we directly return the result of a function instead of assigning it to a variable first, we avoid the double Read/Write. The third option, while it looks interesting, defines another local variable slot. Keep in mind that all of this applies to Debug builds: as noted earlier, the JIT may optimize these differences away in Release mode.

Further reading


Semantic merge as the default merge tool with git on Windows

When working with a version control system like Git, Mercurial, or Plastic SCM, branching and merging are part of our daily work. The merge operation may cause conflicts, which we usually have to solve manually using a 3-way merge tool.

By default git does not provide a tool for this operation, so in this article we will see how to configure an external tool (in our case SemanticMerge, semanticmergetool.exe) in git on Windows.

Setting git configuration

One of the most common ways of setting the configuration is by using the git config command, for example:

git config --global core.editor emacs

This will set the default text editor. The --global modifier saves the configuration at user level; we will talk about the levels in the next section. This operation saves the following data in our configuration file:

[core]
    editor = emacs

Git stores its configuration in plain text files at three different levels:

  • repository level, located inside our local repository at .git\config (git config without extra modifiers)
  • user level, usually located at C:\Users\Me\.gitconfig on Windows (--global modifier)
  • system level, located in our git installation folder, for example C:\Program Files\Git (--system modifier)
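
For example, the same kind of setting lands in a different file depending on the modifier (the values here are just examples):

git config user.name "Me"                # repository level: .git\config
git config --global user.name "Me"       # user level: C:\Users\Me\.gitconfig
git config --system core.autocrlf true   # system level: requires admin rights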

Setting the custom merge tool

Now it’s easy to set the merge tool inside the configuration file. The first thing we need is to define a name for our tool:

[merge]
    tool = SemanticMerge

Afterwards, for the selected tool, we must define the path of the executable that will be launched, inside its own section:

[mergetool "SemanticMerge"]
    path = C:/Program Files/PlasticSCM4/semanticmerge/semanticmergetool.exe

Additionally, we must set a couple more parameters: keepBackup = false prevents git from leaving .orig backup files behind after the merge, and trustExitCode = false makes git check whether the merged file was actually modified (asking us if needed) instead of relying on the tool's exit code.

[mergetool "SemanticMerge"]
    keepBackup = false
    trustExitCode = false

Finally, we must set the command line that will be passed to the tool. Git replaces the $BASE, $LOCAL, $REMOTE, and $MERGED placeholders with temporary paths for the base, destination, source, and result files.

[mergetool "SemanticMerge"]
    ...
    cmd = \"C:/Program Files/PlasticSCM4/semanticmerge/semanticmergetool.exe\" -b=\"$BASE\" -d=\"$LOCAL\" -s=\"$REMOTE\" -r=\"$MERGED\" -l=csharp -emt=\"mergetool.exe -b=\"\"@basefile\"\" -bn=\"\"@basesymbolic\"\" -s=\"\"@sourcefile\"\" -sn=\"\"@sourcesymbolic\"\" -d=\"\"@destinationfile\"\" -dn=\"\"@destinationsymbolic\"\" -r=\"\"@output\"\" -t=\"\"@filetype\"\" -i=\"\"@comparationmethod\"\" -e=\"\"@fileencoding\"\"\" -edt=\"mergetool.exe  -s=\"\"@sourcefile\"\" -sn=\"\"@sourcesymbolic\"\" -d=\"\"@destinationfile\"\" -dn=\"\"@destinationsymbolic\"\" -t=\"\"@filetype\"\" -i=\"\"@comparationmethod\"\" -e=\"\"@fileencoding\"\"\"

This is the final result:

[merge]
    tool = SemanticMerge

[mergetool "SemanticMerge"]
    path = C:/Program Files/PlasticSCM4/semanticmerge/semanticmergetool.exe
    keepBackup = false
    trustExitCode = false
    cmd = \"C:/Program Files/PlasticSCM4/semanticmerge/semanticmergetool.exe\" -b=\"$BASE\" -d=\"$LOCAL\" -s=\"$REMOTE\" -r=\"$MERGED\" -l=csharp -emt=\"mergetool.exe -b=\"\"@basefile\"\" -bn=\"\"@basesymbolic\"\" -s=\"\"@sourcefile\"\" -sn=\"\"@sourcesymbolic\"\" -d=\"\"@destinationfile\"\" -dn=\"\"@destinationsymbolic\"\" -r=\"\"@output\"\" -t=\"\"@filetype\"\" -i=\"\"@comparationmethod\"\" -e=\"\"@fileencoding\"\"\" -edt=\"mergetool.exe  -s=\"\"@sourcefile\"\" -sn=\"\"@sourcesymbolic\"\" -d=\"\"@destinationfile\"\" -dn=\"\"@destinationsymbolic\"\" -t=\"\"@filetype\"\" -i=\"\"@comparationmethod\"\" -e=\"\"@fileencoding\"\"\"

Now, if we generate a conflict and run git mergetool, SemanticMerge will launch to resolve it, displaying a message like this:

git mergetool
Merging:
base.cs

Normal merge conflict for 'base.cs':
  {local}: modified file
  {remote}: modified file

Hit return to start merge resolution tool (SemanticMerge):

Happy hacking!

Further reading

Firefox OS: First steps

Read this article in Spanish here

On March 20th I had the opportunity to attend the “Firefox OS App Days” here in Valladolid. The goal was simple: two hours of introductory sessions and a hackathon, to get first-hand contact with the platform.

The Platform

Firefox OS is, at a glance, a Linux kernel similar to Android's core plus a web browser, so everything, including the start screen and the notifications, is rendered in HTML5; there are no “native apps”. The main difference between this approach and running an app inside an Android or iPhone browser is that Firefox OS apps have access to the phone APIs, including, but not limited to, contacts, calendar, and other options.
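
For illustration only (the app name, description, and permission below are made-up examples), access to those phone APIs is requested through the app's manifest.webapp file, which might look like this:

{
  "name": "My ToDo",
  "description": "A simple task list",
  "launch_path": "/index.html",
  "permissions": {
    "contacts": { "access": "readwrite", "description": "Used to share tasks" }
  }
}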

Development: Tools, languages, and the simulator

We can develop a Firefox OS app the same way we create a web app, with HTML + CSS + JavaScript. This means we can also extend our code using LESS, SASS, jQuery, Sencha, and any JavaScript framework we want (for my app I used Knockout.js). We only need a Firefox browser for debugging, as the rendering engine that runs on the phone is the same one that runs in the browser.

Eventually we will need to test things like the camera, the contacts, notifications, or other API functions that are not available in the standard browser. For these scenarios we can use the simulator, which can be installed as a Firefox extension.

Building blocks: Native interface

As I said earlier, there are no “native” apps on Firefox OS. This means we can use the operating system's own styles and elements for lists, buttons, headers, and dialogs in our applications, so they integrate seamlessly. Today there is no base app template, so we need to manually copy the CSS files from the Gaia repository (see the links at the end of the article) into our app.
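
Once copied, the styles are referenced like any other stylesheet; the file names below are hypothetical, so use the ones you actually copied from Gaia:

<!-- Hypothetical file names copied from the Gaia repository -->
<link rel="stylesheet" href="style/headers.css">
<link rel="stylesheet" href="style/buttons.css">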

My first app

After the introduction we had a hackathon: less than two hours to get a functional app up and running. My first project is a ToDo list, with the following features:

  • The styles are made with the Building Blocks from the Gaia repository. Gaia is the name of the Firefox OS UI.
  • The animations are made with standard CSS3 transforms and some JS; thanks to the folks from Mozilla and Telefonica I+D for their help.
  • The list is handled with Knockout.js, which makes it simple to render a list without manually injecting HTML into the DOM (see the sketch below).
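
As a rough sketch (the view model and its fields are made up, not the actual app code), a Knockout-bound list looks like this:

<ul data-bind="foreach: tasks">
  <li data-bind="text: title"></li>
</ul>
<script>
  // Hypothetical view model: Knockout re-renders the list
  // whenever the observable array changes.
  function TaskListViewModel() {
    this.tasks = ko.observableArray([{ title: "Buy milk" }]);
    this.addTask = function (title) {
      this.tasks.push({ title: title });
    };
  }
  ko.applyBindings(new TaskListViewModel());
</script>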

The result is what you see here:

[Screenshots: the new task screen and the main window]

Next steps

This is a Hello World, of course; for this app to be fully functional I would need to persist the data and extend it, for example using the calendar to set alarms. Firefox OS may not be the leading platform in the future, nor the second or the third, but I think it is worth learning a bit about it.

Links

Introducing GitSync: Now Plastic SCM speaks git


At Codice Software we spend our days developing Plastic SCM, a source control management system designed for corporate environments. Our design philosophy is very similar to that of Git or Mercurial, in that our tool is distributed.

Our main difference is the enterprise features: we adapt to existing infrastructures, using LDAP and Active Directory as authentication methods, and databases that range from embedded SQL CE or Firebird to big solutions like Oracle or SQL Server. We also have our own security layer at the server, repository, or even branch level, so we can assign specific permissions to the different user roles (e.g. the release branch can only be modified by the dev-ops employees).

We also have a complete, clean user interface, and integration with the most used IDEs on the market: Visual Studio, Eclipse, and IntelliJ IDEA.

In our effort to push our product forward, we are announcing GitSync, a new component that allows Plastic SCM to synchronize with a remote git server such as GitHub or Bitbucket. This allows us to share code with other developers who use git for version control, and also to use services like Heroku, AppHarbor, or Windows Azure right from the Plastic SCM UI (or from our command line client).

We are launching a private beta, which you can access at this link. We would love to get feedback from you, and we have 10 iPhone 5s for our best testers. Do you want to join?

If you want to know more about the company, please visit us at plasticscm.com

Trilo: Travel like a hacker

Read this article in Spanish here

I love traveling and programming curious (but useless) things. For a while I had the idea of creating a travel blog, but creating yet another account on WordPress.com for “just another travel blog” didn't look like the coolest option, so I had a better idea.

What about combining a simple content generation system like Jekyll, some web services, a couple of links, and a Sunday afternoon? I wanted my page to have some specific things:

  • A list of visited countries, showing the flags and the number of visited cities for each country.
  • For each country, a map of the different cities.
  • For each city, a map showing the recommended places.

The result of this idea is Trilo:
[Screenshot: Trilo home page]

The home page is made up of two maps, one at world scale and one of Europe. Both are created using Google Chart Tools.
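
As a sketch of how these maps can be built (country names and counts are illustrative, and the snippet uses the current Google Charts loader):

<div id="world_map"></div>
<script src="https://www.gstatic.com/charts/loader.js"></script>
<script>
  google.charts.load('current', { packages: ['geochart'] });
  google.charts.setOnLoadCallback(function () {
    var data = google.visualization.arrayToDataTable([
      ['Country', 'Cities visited'],
      ['Spain', 5],
      ['France', 2]
    ]);
    var chart = new google.visualization.GeoChart(
      document.getElementById('world_map'));
    chart.draw(data, {});                    // world-scale map
    // chart.draw(data, { region: '150' }); // '150' would zoom to Europe
  });
</script>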

The flags are obtained from a web service using the two-letter country code.


This country code is also used as a category.

[Screenshot: country page]

On the country page we can find the flag we obtained before, a zoomed map of its cities, and a link to each city page.

[Screenshot: city page]

For each city, we can generate a static map with the Google Maps API from the city name and the recommendations. We also link to TripAdvisor and Wikipedia for more information.
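
As an illustrative example (the city and marker are made up, and newer versions of the API also require a key parameter), a static map is just an image URL:

<!-- center, zoom, size and markers are standard Static Maps parameters -->
<img src="https://maps.googleapis.com/maps/api/staticmap?center=Paris&zoom=13&size=600x300&markers=color:red%7CLouvre,Paris"
     alt="Map of Paris" />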

As I wrote before, the system is based on Jekyll, a system written in Ruby that allows us to generate static pages from templates, some code, and a little bit of magic. It has been an interesting exercise: a simple mashup around something not related to computing.
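
As a sketch, a city page in Jekyll starts with YAML front matter; layout, title, and categories are standard Jekyll keys, while the values (and the city layout itself) are illustrative:

---
# Hypothetical front matter for a city page
layout: city
title: Paris
categories: fr
---
The page content goes here; Jekyll renders it through the chosen layout.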

The project is available (and hosted) at http://rlbisbe.github.com/trilo/