There is no such thing as a free Free monad


Disclaimer: this article was also published on my company blog.

Random musings on managing state, side effects and decoupling.

Intro

This post was supposed to be about the DurableTask library and how we use it at FinAi to orchestrate the biometric authentication process. In the meantime, I got inspired by Mark Seemann’s writings about “pure interactions”. Go read them – they’re wonderful! I call DurableTask a library (not a framework) because it’s very easy to decouple your application logic from it (at least in F#). Yeah, decoupling is great, isn’t it? Believe me, you can go astray with decoupling. See for yourself.

Biometric authentication

In essence (and with vast simplification) the biometric authentication process looks like this:

  • acquire user’s identity document scan
  • verify it and if valid continue
  • acquire user’s face photo
  • verify it with document
  • publish result

Which can be written in pseudo-code:

let verifyUser getDocImage verifyDoc getFaceImage verifyFace publishResult =
    let docId = getDocImage ()
    let (result, docVerificationId) = verifyDoc docId
    if result
    then
        let faceId = getFaceImage ()
        let result = verifyFace docVerificationId faceId
        publishResult result
    else
        publishResult false

We have lots of dependencies (this way it’s almost like constructor injection in C#), but we can certainly mock and test it. If we want to go asynchronous, we need to wrap it in an async {} block and add bangs (!) everywhere. But what if the process is long-running? For example, in some corner cases doc/face verification could be done by a “mechanical Turk”. What about saving state? What happens if the process crashes? Can we run this code on multiple machines? And so on…
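For illustration, here is a minimal test sketch with hand-written stubs (the stubs and the `published` cell are my own additions, not part of the original design; verifyUser is repeated so the snippet stands alone):

```fsharp
open System

// verifyUser, repeated from above for completeness
let verifyUser getDocImage verifyDoc getFaceImage verifyFace publishResult =
    let docId = getDocImage ()
    let (result, docVerificationId) = verifyDoc docId
    if result
    then
        let faceId = getFaceImage ()
        let result = verifyFace docVerificationId faceId
        publishResult result
    else
        publishResult false

// stubs: generate GUIDs instead of real images, always verify positively
let mutable published : bool option = None
verifyUser
    (fun () -> Guid.NewGuid())          // getDocImage
    (fun _ -> (true, Guid.NewGuid()))   // verifyDoc
    (fun () -> Guid.NewGuid())          // getFaceImage
    (fun _ _ -> true)                   // verifyFace
    (fun r -> published <- Some r)      // publishResult records the outcome
// published = Some true
```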

The question is, can we decouple it a little more?

Going meta

The biometry language definition (again, thanks to Mark Seemann for the inspiration):

type DocId = Guid
type FaceId = Guid
type DocVerificationId = Guid

type BiometryInstruction<'a> =
| GetDocImage of (DocId -> 'a)
| VerifyDoc of (DocId * (bool * DocVerificationId -> 'a))
| GetFaceImage of (FaceId -> 'a)
| VerifyFace of (DocVerificationId * FaceId) * (bool -> 'a)
| PublishResult of bool // always last instruction -> no continuation

type BiometryProgram<'a> =
| Free of BiometryInstruction<BiometryProgram<'a>>
| Pure of 'a

The BiometryProgram equivalent of the pseudo-code above:

let verifyUserProgram () =
    Free (GetDocImage (
        fun docId -> Free (VerifyDoc (
            docId,
            fun (result, docVerificationId) ->
                if result
                then
                    Free (GetFaceImage (
                        fun faceId ->
                            Free (VerifyFace (
                                (docVerificationId, faceId),
                                fun result -> Free (PublishResult result)))))
                else
                    Free (PublishResult false)))))

Ugly, isn’t it? (or beautiful if you’re a Lisp fanatic)

Let’s add some syntactic sugar (most of this code is boilerplate which languages with more powerful type systems – like Haskell – can generate):

// BEGIN: Monadic stuff that Haskell does automatically
module BiometryMonad =
    let private map f = function
    | GetDocImage next -> GetDocImage (next >> f)
    | VerifyDoc (x, next) -> VerifyDoc (x, next >> f)
    | GetFaceImage next -> GetFaceImage (next >> f)
    | VerifyFace (x, next) -> VerifyFace (x, next >> f)
    | PublishResult x -> PublishResult x

    let rec bind f = function
    | Free instruction -> instruction |> map (bind f) |> Free
    | Pure x -> f x

type BiometryBuilder() =
    member __.Bind (x, f) = BiometryMonad.bind f x
    member __.Return x = Pure x
    member __.ReturnFrom x = x

let biometry = BiometryBuilder ()
// END

// shortcuts for instructions
let getDocImage = Free (GetDocImage Pure)
let verifyDoc docId = Free (VerifyDoc (docId, Pure))
let getFaceImage = Free (GetFaceImage Pure)
let verifyFace docVerificationId faceId =
    Free (VerifyFace ((docVerificationId, faceId), Pure))
let publishResult r = Free (PublishResult r)
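As a sanity check that the boilerplate works, the same instructions also compose with bind directly, without the builder – a hypothetical doc-only program of my own (skipping the face step), assuming the definitions above:

```fsharp
// composing with BiometryMonad.bind instead of the biometry builder;
// verify the document and publish its result immediately
let docOnlyProgram =
    getDocImage
    |> BiometryMonad.bind verifyDoc
    |> BiometryMonad.bind (fun (isValid, _) -> publishResult isValid)
```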

Now we can write a BiometryProgram that looks almost exactly like our pseudo-code from the beginning:

let verifyUserProgram () =
    biometry {
        let! docId = getDocImage
        let! (result, docVerificationId) = verifyDoc docId
        if result
        then
            let! faceId = getFaceImage
            let! result = verifyFace docVerificationId faceId
            return! publishResult result
        else
            return! publishResult false
    }

Going down

Very readable, but how do we run it? We must write an interpreter. Let’s start with a basic synchronous version. Instead of taking photos and doing real verification, we will generate GUIDs and return true 🙂

module SyncInterpreter =
    let rec interpret = function
    | Pure x -> x
    | Free (GetDocImage next) -> Guid.NewGuid() |> next |> interpret
    | Free (VerifyDoc (docId, next)) ->
        printfn "VerifyDoc %A" docId
        (true, Guid.NewGuid()) |> next |> interpret
    | Free (GetFaceImage next) -> Guid.NewGuid() |> next |> interpret
    | Free (VerifyFace (request, next)) ->
        printfn "VerifyFace %A" request
        true |> next |> interpret
    | Free (PublishResult result) -> printfn "Result is %A" result

The result of running it in the REPL:

> verifyUserProgram () |> SyncInterpreter.interpret;;
VerifyDoc 4b8b1115-4f8f-4e31-8c34-22a518064066
VerifyFace (d6db0275-56d2-4bb0-bfdb-37e649efb0f6, e0f007c7-e822-48f2-b72d-df88e2b72823)
Result is true

Now we can go asynchronous without (!) modifying the original program:

module AsyncInterpreter =
    let rec interpret = function
    | Pure x -> x
    | Free (GetDocImage next) ->
        async { return! Guid.NewGuid() |> next |> interpret }
    | Free (VerifyDoc (docId, next)) ->
        async {
            printfn "VerifyDoc %A" docId
            return! (true, Guid.NewGuid()) |> next |> interpret
        }
    | Free (GetFaceImage next) ->
        async { return! Guid.NewGuid() |> next |> interpret }
    | Free (VerifyFace (request, next)) ->
        async {
            printfn "VerifyFace %A" request
            return! true |> next |> interpret
        }
    | Free (PublishResult result) ->
        async { do printfn "Result is %A" result }

The result of running it in the REPL:

> verifyUserProgram () |> AsyncInterpreter.interpret |> Async.RunSynchronously;;
VerifyDoc 3c45965f-306b-4807-aa75-8f838e5eff67
VerifyFace (4361da1d-3428-48f5-9b8c-b7d016519562, 271be688-e61f-4ac5-85b2-112c0aebc72b)
Result is true

Reaping rewards

At last, let’s use DurableTask to address our reliability questions. This library “allows users to write long running persistent workflows”, and in our situation it makes the process:

  • Resilient: State will be persisted at “checkpoints” (e.g. image capture, results of doc/face verification) and restored if needed (e.g. after crash or for audit/monitoring purposes)
  • Scalable: Backend leverages Azure Service Bus (or – in new version – Service Fabric) and what started on one node can continue on another

DurableTask is OOP-friendly, which hurts my FP eyes, but thanks to decoupling we can reuse verifyUserProgram:

// abstract external dependencies serving as "checkpoints" for Orchestration
type IBiometryActivities =
    abstract VerifyDoc: DocId -> Task<bool * DocVerificationId>
    abstract VerifyFace: DocVerificationId * FaceId -> Task<bool>
    abstract PublishResult: bool -> Task<unit>

type BiometryOrchestration() =
    inherit TaskOrchestration<unit, DocId, FaceId, string>()

    let mutable tcs = new TaskCompletionSource<FaceId>()

    // bind operator (fancy Task.ContinueWith)
    let (>>=) (x: Task<_>) f = task { let! x' = x in return! x' |> f }

    let run (activityClient: IBiometryActivities) docId =
        let rec interpret = function
        | Pure x -> x
        | Free (GetDocImage next) -> docId |> next |> interpret
        | Free (VerifyDoc (docId, next)) ->
            docId |> activityClient.VerifyDoc >>= (next >> interpret)
        | Free (GetFaceImage next) -> tcs.Task >>= (next >> interpret)
        | Free (VerifyFace (request, next)) ->
            request |> activityClient.VerifyFace >>= (next >> interpret)
        | Free (PublishResult result) -> result |> activityClient.PublishResult
        verifyUserProgram () |> interpret

    override __.RunTask(context, docId) =
        let activityClient = context.CreateClient<_>()
        tcs <- new TaskCompletionSource<FaceId>()
        run activityClient docId

    override __.OnEvent(_, _, faceId) = faceId |> tcs.TrySetResult |> ignore

The most interesting points:

  • Orchestration process starts when identity document arrives (DocId is an input)
  • Face image (FaceId) can arrive at any time (as an event sent to Orchestration)
  • Because the RunTask method can be run many times (replaying all previous events and activity results), we must reset all “state” (the TaskCompletionSource) each time
  • Code inside RunTask should be free of side effects (BiometryProgram is a great fit here) or wrapped in a Task activity (that’s the interpreter’s job)
  • DurableTask depends on TPL semantics, and that’s why TaskBuilder.fs is used instead of F#’s default async (see Tomas Petricek’s post on C#/F# async differences)

Final thoughts

Surprise, surprise: I didn’t go this route in my production code. Purity and decoupling are great, but I value simplicity more. IMHO such a heavy abstraction (measured in mental operations rather than lines of code) is not justified for one-off use. What’s your opinion? Anyway, I found it interesting enough to write this post. DurableTask definitely deserves a post of its own, which will come some day. Bye!

How _not_ to upgrade to ASP.NET Core 2.0 just yet (with Paket)


Yesterday I returned from a one-week vacation and found everything broken. I love programming :). What I did:

.paket\paket.exe update

Meanwhile, Microsoft had released a new ASP.NET Core, and after the package update I got a broken app with mixed 1.1/2.0 references. I tried to specify a version constraint (“~> 1”) in the paket.dependencies file, but it didn’t help much. Root references resolved to 1.1.2 (the newest 1.x version at the time), but transitive ones did not. That’s because Paket resolves them with the “highest matching version” strategy, and ASP.NET Core 1.x packages don’t set an upper limit (e.g. < 2) on their dependencies. “Highest matching version” is the desired behavior most of the time, but not in this situation. Luckily, it’s configurable globally or per dependency:

-nuget Microsoft.AspNetCore.Hosting
-nuget Microsoft.AspNetCore.Owin
-nuget Microsoft.AspNetCore.Server.IISIntegration
-nuget Microsoft.AspNetCore.Server.Kestrel
-nuget Microsoft.AspNetCore.StaticFiles
-nuget Microsoft.Extensions.Configuration.AzureKeyVault
-nuget Microsoft.Extensions.Configuration.EnvironmentVariables
-nuget Microsoft.Extensions.Configuration.Json
-nuget Microsoft.Extensions.Configuration.UserSecrets
-nuget Microsoft.Extensions.FileProviders.Abstractions
+nuget Microsoft.AspNetCore ~> 1 strategy: min
+nuget Microsoft.AspNetCore.Hosting ~> 1
+nuget Microsoft.AspNetCore.Owin ~> 1
+nuget Microsoft.AspNetCore.Server.IISIntegration ~> 1
+nuget Microsoft.AspNetCore.Server.Kestrel ~> 1
+nuget Microsoft.AspNetCore.StaticFiles ~> 1
+nuget Microsoft.Extensions.Configuration.AzureKeyVault ~> 1
+nuget Microsoft.Extensions.Configuration.EnvironmentVariables ~> 1
+nuget Microsoft.Extensions.Configuration.Json ~> 1
+nuget Microsoft.Extensions.Configuration.UserSecrets ~> 1
+nuget Microsoft.Extensions.FileProviders.Abstractions ~> 1

I’m going to stay with 1.x until the dust settles 😉

Checking for outdated package references during build (with FAKE & Paket)


Problem statement

Your codebase is growing. Every week brings a handful of new microservices. Each one (potentially) uses different languages, libraries, frameworks, data stores etc. Freedom is in the air and everyone is happy, but… there is always a “but” :). For cross-cutting concerns (e.g. logging, monitoring, authorization…) you have a small list of common libraries which need to stay in sync. You want at least to know if there are any outdated dependencies.

Solution

Automate checking for outdated references! For a .NET developer, a dependency is roughly equivalent to a NuGet package. The best tool for managing NuGet references (and, as we will see later, not only those!) is Paket. And there is a command for that! All we need is to run it in a build script and potentially break the build if there are some important upgrades.

Details

Output from the paket outdated command looks like this:

Paket version 5.0.0
Resolving packages for group Main:
 - Castle.Core 4.1.0
 - Castle.Windsor 3.3.0
Outdated packages found:
  Group: Main
    * Castle.Core 2.0.0 -> 4.1.0
    * Castle.Windsor 2.0.0 -> 3.3.0
Performance:
 - Resolver: 3 seconds (1 runs)
    - Runtime: 199 milliseconds
    - Blocked (retrieving package details): 1 second (3 times)
    - Blocked (retrieving package versions): 1 second (1 times)
    - Not Blocked (retrieving package versions): 1 times
 - Average Request Time: 884 milliseconds
 - Number of Requests: 12
 - Runtime: 4 seconds

We need to parse it and extract the list of outdated packages:

let parseOutput (lines: string seq) =
    lines
    |> Seq.fold (fun state line ->
        match state, line.Trim() with
        | (false, list), "Outdated packages found:" -> true, list
        | (true, list), "Performance:" -> false, list
        | (true, list), line when line.StartsWith "* " -> true, line :: list
        | state, _ -> state) (false, [])
    |> snd
    |> List.rev
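To make the parsing concrete, here is what parseOutput returns for a trimmed-down sample of the output above (the sample list is mine, assuming parseOutput as defined):

```fsharp
// a cut-down sample of `paket outdated` output
let sample =
    [ "Paket version 5.0.0"
      "Outdated packages found:"
      "  Group: Main"
      "    * Castle.Core 2.0.0 -> 4.1.0"
      "    * Castle.Windsor 2.0.0 -> 3.3.0"
      "Performance:"
      " - Runtime: 4 seconds" ]

parseOutput sample
// → ["* Castle.Core 2.0.0 -> 4.1.0"; "* Castle.Windsor 2.0.0 -> 3.3.0"]
```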

Yep, that’s F# code! I use FAKE for my build scripts, and so should you, because it’s great! FAKE gives me an easy domain-specific language (DSL) for build tasks together with the power of F#.

Using FAKE’s ProcessHelper, I can run the paket outdated command and feed the output into my function:

let runPaket () =
    ProcessHelper.ExecProcessAndReturnMessages
        (fun psi ->
            psi.FileName <- ".paket/paket.exe"
            psi.Arguments <- "outdated")
        (TimeSpan.FromMinutes 5.) // timeout
    |> fun r -> r.Messages

let outdated = runPaket () |> parseOutput

Given a list of outdated packages, I want to decide which are important (i.e. should break the build). For example, all packages with a name starting with “MyCompany.” should pull the trigger:

let filterByPrefix prefix (msg: string) =
    if msg.StartsWith("* " + prefix) then true
    else (trace msg; false)  // FAKE's trace, available after `open Fake`

let breakIfAny = function
| [] -> ()  // do nothing
| any -> failwithf "Outdated packages found: %A" any  // break the build

outdated |> List.filter (filterByPrefix "MyCompany.") |> breakIfAny

Now it’s easy to create a build step (called a Target in the FAKE DSL):

Target "Outdated" <| fun _ ->
    runPaket ()
    |> parseOutput
    |> List.filter (filterByPrefix "MyCompany.")
    |> breakIfAny

...

"Clean"
    =?> ("Outdated", hasBuildParam "FindOutdatedPackages") // optional step
    ==> "Restore"
    ==> "Build"
    ==> "Test"
    ==> "Publish"
    ==> "Package"

Postscriptum

Do you remember that Paket manages not only NuGet references? We can use it to share our target with other developers. Just save it as a Gist. Now you can add it as a reference in the paket.dependencies file:

...
gist orient-man/c29c299ed970fd097f80124ffde734ce FindOutdatedPackages.fsx
nuget FAKE
...

And in your FAKE script:

#load "paket-files/orient-man/c29c299ed970fd097f80124ffde734ce/FindOutdatedPackages.fsx"

Target "Outdated" <| fun _ -> OutdatedPackages.target "MyCompany."

That’s all folks.

HOWTO: Publish Your reveal.js Presentations on GitHub Pages


Every now and then I need to publish a presentation on the Web and – of course – each time I have to figure it out again. So here is my recipe:

  1. I prepare my presentation on a branch in a forked reveal.js repo. For example, this is the source code for my talk at the 4Developers Conference.
  2. When I’m done, I create a new repo with a meaningful name (the name will be part of the URL) – mytalk is just an example:

    git init
    git remote add origin https://github.com/username/mytalk
    git checkout -b gh-pages
    git remote add reveal.js https://github.com/username/reveal.js
    git fetch reveal.js
    git merge --ff-only reveal.js/mytalk
    git push -u origin gh-pages
  3. After some time (needed by GitHub to process the repo) my presentation will show up at: http://username.github.io/mytalk/

You can also set a custom (sub)domain for GitHub Pages:

  1. Add CNAME record via your DNS provider pointing from mytalk.mydomain.com to username.github.io.
  2. Add CNAME file to your repo (on gh-pages branch) containing single line with your domain (mytalk.mydomain.com). See working example: https://github.com/orient-man/4Dev2015/blob/gh-pages/CNAME

Connect the Dots aka ASCIImage in F#


Recently I came across a great blog post introducing the ASCIImage program. What does it do?

Given:

. . . . . . . . . . .
. . A B . . . . . . .
. . J # # . . . . . .
. . . # # # . . . . .
. . . . # # # . . . .
. . . . . I # C . . .
. . . . . H # D . . .
. . . . # # # . . . .
. . . # # # . . . . .
. . G # # . . . . . .
. . F E . . . . . . .
. . . . . . . . . . .

Then:

a rendered chevron icon (image in the original post)

Yay! Now every programmer can be an icon designer… and it makes for a decent F# kata.

We need to connect the dots (denoted by letters), and the following rules apply:

  • A single isolated dot encodes 1 pixel
  • A sequence of dots (i.e. consecutive letters: A, B, C…) translates to a polygon
  • A dot repeated four times defines an ellipse
  • Finally, a dot repeated two times defines a line

I decided to introduce some minor format changes. I wanted the input format to be self-contained, without the need to provide additional code (i.e. blocks in the Objective-C version). Just create a text file, run the program and voilà, see the result. I chose to support only black (solid) and white (transparent) shapes. Now letters A..Z encode the former and a..z the latter.

Let’s start with a type-first approach:

type Dot = int * int

type Shape =
    | Pixel of Dot
    | Line of Dot * Dot
    | Ellipse of Dot * Dot * Dot * Dot
    | Polygon of Dot list

type Opacity = Solid | Transparent

type ParserApi = string [] -> (Opacity * Shape) list

Our goal is to implement the ParserApi function transforming the ASCII representation into a list of shapes. We can start by extracting dots with their positions (string [] -> ((Dot * Opacity) * int) list):

let ascii2dots (arr : string []) = 
    let (|InRange|_|) first last = function
        | c when c >= first && c <= last -> Some(int(c) - int(first))
        | _ -> None

    [ for y in 0..arr.Length - 1 do
          let row = arr.[y].Replace(" ", "")
          for x in 0..row.Length - 1 do
              match row.[x] with
              | InRange 'A' 'Z' idx -> yield (((x, y), Solid), idx)
              | InRange 'a' 'z' idx -> yield (((x, y), Transparent), idx)
              | _ -> () ]

As you can see, F# features like pattern matching, active patterns (|InRange|) and defining list with “yields” ([ for … ]) make for very concise and readable code.
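A quick check of ascii2dots on a toy two-row grid (my own input, assuming the definition above):

```fsharp
// 'A' is the first solid letter (index 0), 'b' the second transparent one (index 1)
ascii2dots [| ". A ."; ". b ." |]
// → [(((1, 0), Solid), 0); (((1, 1), Transparent), 1)]
```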

Let’s pretend that we have an active pattern for every rule. Those patterns would examine the beginning of the list and, if matched, return one or more dots making up the shape, its opacity, and the tail – the list of unmatched dots. Given that, we could write the parser in just a few lines:

let rec parse dots = 
    [ match dots with
      | Single(p, op, tail) -> yield op, Pixel(p); yield! parse tail
      | Sequence(points, op, tail) -> yield op, Polygon(points); yield! parse tail
      | Quad(points, op, tail) -> yield op, Ellipse(points); yield! parse tail
      | Duo(points, op, tail)-> yield op, Line(points); yield! parse tail
      | _ -> () ]

let api : ParserApi = fun rep -> rep |> ascii2dots |> List.sortBy snd |> parse

Recursive list definition with yield bang – sweet! And now the time has come for remaining patterns.

A dot is considered “single” if the next one is skipped (e.g. A, C, E…) or it is the last one. Piece of cake:

let (|Single|_|) = function
    | ((p, op), i1)::(d2, i2)::tail when i2 = i1 + 2 -> Some(p, op, (d2, i2)::tail)
    | ((p, op), _)::[] -> Some(p, op, [])
    | _ -> None

Patterns for dots repeated 4 or 2 times are also straightforward:

let (|Quad|_|) = function
    | ((p1, op), i1)::((p2, _), i2)::((p3, _), i3)::((p4, _), i4)::tail
        when i1 = i2 && i2 = i3 && i3 = i4 -> Some((p1, p2, p3, p4), op, tail)
    | _ -> None

let (|Duo|_|) = function
    | ((p1, op), i1)::((p2, _), i2)::tail when i1 = i2 -> Some((p1, p2), op, tail)
    | _ -> None

Beware: they must be applied in this order – a dot repeated four times would otherwise also match the Duo pattern.

The most difficult pattern is the sequence, because we need to collect an unspecified number of consecutive dots:

let (|Sequence|_|) dots =
    let wrapResult points tail =
        match points with
        | (_, op)::_ -> Some(points |> List.map fst |> List.rev, op, tail)
        | [] -> None

    let rec collect acc = function
        | (d1, i1)::(d2, i2)::tail when i2 = i1 + 1 -> collect (d1::acc) ((d2, i2)::tail)
        | Single(p, op, tail) -> wrapResult ((p, op)::acc) tail
        | tail -> wrapResult acc tail

    collect [] dots
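To see the patterns cooperate, consider dots A, B, C followed by an isolated E (indices 0, 1, 2 and 4) – my own toy input, assuming the patterns above:

```fsharp
// A, B, C are consecutive, so they form a polygon; E is left in the tail
let sampleDots =
    [ (((1, 1), Solid), 0)    // A
      (((2, 1), Solid), 1)    // B
      (((3, 1), Solid), 2)    // C
      (((5, 5), Solid), 4) ]  // E

match sampleDots with
| Sequence (points, op, tail) ->
    // points = [(1, 1); (2, 1); (3, 1)], op = Solid, tail keeps only E
    printfn "%A %A %A" points op tail
| _ -> printfn "no sequence"
```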

That’s all – the parser is ready. What’s left is to generate a bitmap from the list of shapes. But drawing is boring: mostly API-driven code. I’ll show you only the type signature (the rest is on GitHub):

type DrawingApi = Parser.ParserApi -> int -> string [] -> System.Drawing.Bitmap

Given a parser, a scale and an ASCII representation, a DrawingApi implementation should return a bitmap. The Drawing module depends only on the parser abstraction. In the composition root (aka main) you tie it all together:

open System
open System.IO

let ascii2image inputFile scale =
    let asciiRep = File.ReadAllLines(inputFile)
    // Poor man's dependency injection
    let draw = DrawingImplementation.api ParserImplementation.api
    let bitmap = draw scale asciiRep
    bitmap.Save(inputFile.Replace(".txt", ".png"), System.Drawing.Imaging.ImageFormat.Png)

[<EntryPoint>]
let main = function
    | [|inputFile|] -> ascii2image inputFile 1; 0
    | [|inputFile; scale|] -> ascii2image inputFile (Int32.Parse(scale)); 0
    | _ -> printfn "Example usage: ascii2image file.txt"; -1

You can find full source code on GitHub.

Final thoughts

The code above is twice as long as my first attempt. For example, you could inline all the active patterns and make it one complicated recursive function. But I wanted it to be explicit – no comments, the code should speak for itself.

Oh, I almost forgot. Let’s draw a familiar logo 🙂

. . H # I A # . . . .
c . K # # d J # # . .
. . . . . b # # # # .
W # # X . . . # # # #
Z # # Y . . . . # # #
A . b . . . . . b # A
R # # S . . . . # # #
U # # T . . . # # # #
. . . . . b # # # # .
f . M # # e N # # . .
. . P # O A # # . . .


Exploratory Unit Tests for Ninject


I recently read an excellent book, Dependency Injection in .NET by Mark Seemann. Oh boy, I should have read it 2 years ago, before I started playing seriously with DI. Dependency Injection looks like an easy concept to grok, and what could possibly go wrong? Go figure! Or better, read this book. Don’t be misled by the title: it’s not only about DI and .NET. The author gives a great overview of modern object-oriented programming, with examples that happen to be in C# but could be in Java or something else.

My DI container of choice is Ninject, but unfortunately the author doesn’t cover it. I decided to fill this gap and created exploratory unit tests based on the code examples from the book:

https://github.com/orient-man/NinjectExploratoryTests

Quote

… is dead

Lately everything reminds me of something…

We criticize everyone wholesale, accusing them of lacking any sense of a hierarchy of values, of complete disorientation when it comes to visual art, of blurring the conflicts and problems essential to art, of extreme provincialism and improbable painterly ignorance.

(…) and theories that a painter must be stupid were rampant in Poland at the time.

(…) to them we were, naturally, the art of the past, doomed to extinction. Today, in turn, it is the abstractionists-constructivists who are deemed hopeless reactionaries by the tachists, for whom constructivism, the embodiment of backwardness, killed art through excessive rationalism.

Is programming an art or engineering? Do engineers discuss building bridges this way?

All quotes come from “Patrząc” (“Looking”), a volume of essays by Józef Czapski.

Quote

TDD By Example: Quotes


From Kent Beck’s book TDD By Example.

We should teach coding in a test-first manner from the very beginning:

I taught Bethany, my oldest daughter, TDD as her first programming style when she was about age 12. She thinks you can’t type in code unless there is a broken test. The rest of us have to muddle through reminding ourselves to write the tests.

About the role of tester in post-TDD era:

However, if the defect density of test-driven code is low enough, then the role of professional testing will inevitably change from “adult supervision” to something more closely resembling an amplifier for the communication between those who generally have a feeling for what the system should do and those who will make it do.

About the overwhelming fear in pre-TDD era:

Test-driven development is a way of managing fear during programming […]. Fear makes you tentative. Fear makes you want to communicate less. Fear makes you shy away from feedback. Fear makes you grumpy.

Stop it! Learn TDD instead!

Filtering MiniProfiler results with jQuery


Today I had to optimize a slow page. We use MiniProfiler, which makes finding such issues a breeze. But this time it was a little more difficult: MiniProfiler showed over 200 SQL queries. How to find the slow ones? Maybe I’m missing something, but it seems there is no built-in way to filter MiniProfiler results. I came up with a quick jQuery snippet to do just that:

// show only slow rows (100ms+)
$('.profiler-info div:last-child')
    .filter(function () {
        // all but having 3+ digit times
        return !$(this).text().match(/^\d{3}.*/);
    })
    .parent()
    .parent()
    .hide()     // tr.profiler-odd
    .next()
    .hide();    // tr.profiler-gap-info

Just run it in the browser’s console window and the result should look like this:

(screenshot: filtered MiniProfiler results)

Git jest git (“Git is good”)

Last week I finally ran a git training for my “coworkers”. The boil had to burst. A year has passed since I dragged git from home to work and quietly hooked myself up to the central repo via git-svn. There may have been slight surprise at where 20 commits in one minute suddenly came from, but generally I didn’t brag about it. I didn’t want to burn the topic, and I didn’t feel the power yet. Once I got my bearings (thanks, among other things, to Arek’s post), I started floating trial balloons:

  • “Ah, if only we had git, life would become beautiful and the Light would come”
  • “We would discover that branches exist and are good for more than making birch rods”
  • “Because there are none in svn, right?”
  • “We would no longer keep 2 or more working copies and shuttle source files between them like in the stone age”
  • “We would not make backups before svn update like some hunted animal”
  • “If we had git, history would make sense and would no longer return as farce”
  • “And a commit squeezed out after three days of work would not change 3/4 of the system” (with the comment “fixes #666”)

And so on, and so forth, a thousand times over. And I even promised a training, but one month passed, then another… Until a crisis came: piles of work, the approaching Important Demo, and it could not go on any longer. Here I am, swinging like Tarzan across 5 branches at once, while others struggle and don’t even know it. Worse! They are fine with it, they smile – simply an outrage. So I went for it and ran the training off the cuff, a week before the Demo.

I decided to focus on the basics while at the same time showing everything (that was probably a mistake). The first part seems to have worked. I drew heavily on the “Git For Ages 4 And Up” presentation, only using GitViz instead of those nice building blocks. And at the end I wanted to show that git is brilliant in its simplicity also under the hood – that is, the repo structure. My rough notes are available on GitHub.

Summary

Positive pluses

  • I drilled the acyclic graph and the basic operations on it probably long enough (IMHO once that clicks, the rest is a piece of cake)
  • I think it finally clicked for me too 🙂
  • I didn’t get stuck anywhere and git didn’t let me down (what did let me down was the very first command, cd directory, which hung ConEmu every time so badly that only a restart helped)
  • the audience reacted enthusiastically and I think we spent most of the time discussing (when I later looked at the command history, I was shocked how little I had typed in 3 hours)

Negative pluses

  • no agenda – if I had first said, point by point, what I was going to talk about and why in that order, I wouldn’t have had to make so many digressions
  • the example changes and commit messages were off the wall (I meant to prepare them, but ran out of time, and on the spot nothing more sensible than “foo bla bla” came to mind)
  • there was no “why” introduction before presenting the guts (.git/objects etc.)

What’s next

It looks like, at least in my project, we will all switch to git-svn first, and then we’ll see. I have already promised a presentation titled “Git in practice” (git-flow and the like).

PS I owe the negative pluses mainly to Michał Żmijewski, Adam Kruszewski and Hubert Trzewik – many thanks!