Thursday, December 14, 2006

Convincing Prospective Employers that your Delphi skills are relevant to C# / VB / .Net

So you know Delphi and find yourself in the job queue again. There are far fewer Delphi job postings than last time you were looking, and some of them are for converting to .Net. How do you go about cross-training to .Net, and convincing prospective bosses that your skills are transferable?
  1. One answer lies above; look for jobs involving converting Delphi projects to C# (or Java, if that interests you). They'll value your Delphi skills and pay you to learn another language.
  2. Look into certification. There is a whole industry around training towards MCSD and MCAD, either at home with books or via training courses. In my experience, books are a far more effective and much cheaper way to learn anything substantial.
  3. Provide a Delphi to C# comparison with your resume.
  4. At least start working towards MCSD, and say so in interviews and in your resume. That sounds like you are serious and approaching this cross-training in a professional manner. Also, I would guess that whether you are 10% or 80% of the way to finishing makes little difference to many employers.
  5. Start a real-world project, e.g. get involved in an open source project, or build an ASP.Net website. I started this with www.seekdotnet.com and was impressed with their SQL Server package features and price, but I can't speak for the quality of their service as I haven't launched my site yet. Again, mention this in your resume and in interviews.
  6. A website project (more so than a Windows Forms project) is particularly impressive, as they can very easily try it out and see that you're capable of creating something real.
  7. Read books about resume writing, e.g. "What Color Is Your Parachute?" is the classic. Spend days on your resume. A day spent on it could equal a month's worth of waiting for job ads to appear, waiting for them to get back to you and so on.
  8. Research companies you would like to work for. Microsoft provides a listing of "Partners" on its website, which is a fairly complete list of .Net shops in your area. Get a directory of your local "technology park" or precinct. Look for government innovation development programs, and the lists of companies they may provide.
  9. Don't just wait for job ads to appear. Print out your resume, dress up, and approach them. My theory here is that when job ads are written, criteria are decided upon based on their ideal employee, with phrases such as "12 months .Net experience". Immediately you are behind with your measly 2 weeks of .Net experience. When you are face to face with them in their office, however, you are a real person; they can judge your character and enthusiasm, and may even make a position for you that didn't previously exist. They may not have time for the hassle of advertising when really they need you, or they may be just about to advertise. I did this for about 4 days and got one 4-week C# contract followed by a job offer, one call encouraging me to apply for a new position, and one email 4 weeks later asking me if I was still looking for work. This was after 3 months of answering job ads with little response. By the way, my new job still came from answering an ad.
  10. Keep all job ads that interest you, even ads for the wrong job but the right sort of company. They may be worth approaching later, and may contain important info such as the name of a manager to call. Having someone to ask for is an easy way to get past a difficult receptionist. Also, if you were the second best candidate this time, you might get the job next time. Resist the urge to resent that they didn't choose you.
  11. Companies using Delphi, past and present, are still your friends. They will be easy to convince that your Delphi skills are valuable, even if they no longer use it. My 4-week C# contract was with a previously Delphi-based company, and my current boss has done some serious Turbo Pascal work in the past.
Good luck!

Friday, December 08, 2006

Alternative to SUBST: Local Network Shares

Over the past few years I have set up two software teams, with their associated tools, file structures and processes, at different companies.
Of high importance is the structure of the repository in the Version Control System.

Both times I have been aiming for:
  1. ability for the developer to have multiple local working copies, and flexibility to locate their working copies in any folder on any drive.
  2. consistency across all developer machines
  3. ability to use and version control the files of any tool, and have those files be usable by other developers
  4. all necessary files under a single root.

Currently we are working with Subversion and TortoiseSVN, NAnt and CruiseControl.Net. Our code is in C# and C++.

The above has been achieved, and is working well, using the Windows SUBST command to create a drive R: (for Repository) under which all files are stored. Under this, the top-level folders are projects, tools, thirdparty, total (for the build) and rnd (for Research aNd Development).
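Creating the drive is a one-liner in a command prompt or a startup script; the source folder here is just an example:
subst R: D:\Repos\DriveR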

When I upgraded to V1.4 of Tortoise, however, the icons indicating file status (up to date, modified, added etc.) began behaving strangely. On reading the TortoiseSVN list (subject: "TortoiseSVN Bug with overlay icons on network drives") I see this from Stefan Küng:
> The status cache can't work reliably with SUBST drives! The cache works
> by monitoring the filesystem for changes. Every change fires an event
> which the cache catches and acts accordingly. But if you have a SUBST
> drive, then even though you have two (or more) paths that point to the
> very same location on the filesystem, only one event is fired. Which
> means you will *always* get unpredictable result
Later in the thread, a poster from dfa.com says:
> Just a suggestion, but have you tried loopback mounting a network drive?
> Share the folder you create a subst of as a private share (end the share
> name with $), read/write only by the user (or read/write only by the
> machine, if you prefer), and mount that network drive as another drive
> letter. I used to use subst too, but found that too many programs broke
> with it. Never had a problem with the loopback network drive.
Hey! I never thought of that!

So I tried it, and hit the next problem: .Net's Code Access Security. Any mounted network share (even one from your own machine) is treated as belonging to the "Local Intranet" security zone, which by default gets only "medium trust". This isn't good enough for some things, such as NAnt and Visual Studio.

After further digging, and thanks to the references below, I now have my local-share R: drive fully trusted, without affecting the rest of the Local Intranet zone.

Here's the answer:
  1. Share the folder you want to mount.
  2. Map a drive letter to it.
  3. Execute the following lines in a command prompt or batch file:
C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CasPol.exe -q -pp off -machine -addgroup 1 -url file://R:/* FullTrust -name "Drive_R" -description "R: Local Network Drive"
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CasPol.exe -q -pp off -machine -addgroup 1 -url file://R:/* FullTrust -name "Drive_R" -description "R: Local Network Drive"
Each line adds the policy for a different version of .Net, since each version's security policy operates independently.
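To check the result, you can list the machine-level code groups afterwards (shown here for .Net 2.0; the 1.1 CasPol takes the same switches):
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CasPol.exe -m -lg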

Now it just needs a batch file to be able to do:

MapDrive R: D:\Repos\DriveR

Maybe another day...
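In the meantime, here's a rough, untested sketch of what such a MapDrive.bat might look like (the DriveR$ share name, the /persistent option and the hardcoded CasPol paths are all just assumptions to adjust):

@echo off
rem Usage: MapDrive R: D:\Repos\DriveR
set DRIVE=%1
set FOLDER=%2

rem 1. Share the folder as a private share (the trailing $ hides it)
net share DriveR$=%FOLDER%

rem 2. Map the drive letter to the loopback share
net use %DRIVE% \\%COMPUTERNAME%\DriveR$ /persistent:yes

rem 3. Fully trust the new drive for each version of .Net
C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CasPol.exe -q -pp off -machine -addgroup 1 -url file://%DRIVE%/* FullTrust -name "Drive_R" -description "%DRIVE% Local Network Drive"
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CasPol.exe -q -pp off -machine -addgroup 1 -url file://%DRIVE%/* FullTrust -name "Drive_R" -description "%DRIVE% Local Network Drive"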

References
  1. Using CasPol to Fully Trust a Share
  2. How Do I...Script Security Policy Changes?
  3. Getting CLR Security Right - Seeing Double

Monday, September 11, 2006

Firing current buffer as a Rails test in SlickEdit

I've started a new adventure building a fairly large, distributed, database-driven website using Ruby on Rails, with Adobe Flex eventually providing a pretty front end (this is a *very* cool combination). I've got SlickEdit 11 as my IDE, which now supports both Ruby and ActionScript for Flex - nice timing there.

Anyway, what inspired this post was attempting to use my RnD-style development with Ruby's Test::Unit, which implies using Rake. I strongly believe that it is a major productivity boost to be able to edit code, run it, and view the results, all without touching the mouse or fiddling with windows. We developers repeat this cycle many times a day, and the distraction, plus the seconds or minutes taken each time, adds up.

I've managed to get the basic CRUD web stuff going with Rails' wonderful scaffolding feature, generated from an Access .mdb supplied by my boss. 32 tables generated some 503 files!

Now I want to start writing some application logic, and want to set up a little sandbox where I can work within a single file, writing tests, writing the application class, and executing by pressing a key combination in SlickEdit.

Scaffolding in Rails has generated 256 tests already, which can be executed with "rake test_unit" for example, but how do you run a single test?
This post http://nubyonrails.com/articles/2006/07/28/foscon-and-living-dangerously-with-rake got me started. Running "rake test_unit" produces
c:/ruby/bin/ruby -Ilib;test "c:/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake/rake_test_loader.rb"
followed by all the test files to run.

So I just needed to run this command line, appended with the current buffer name, to execute the test that I'm currently editing. If it isn't a test (i.e. it doesn't end with _test before the extension) then I execute it as a normal Ruby file. I also execute .bat buffers like this, and potentially other extensions.
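For example, running a single test by hand ends up looking something like this (customer_test.rb is just a made-up name):
c:/ruby/bin/ruby -Ilib;test "c:/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb"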

I ended up with the following code in my vusrmacs.e.

One thing yet to tidy up is to not hardcode the location of rake_test_loader.rb. I haven't looked into this as yet.

Sorry for the rushed post. I hope it's of use to someone anyway.

Gary

boolean EndsWith(_str haystack, _str needle) {
   // strings are 1-based in Slick-C
   int lp = lastpos(needle, haystack);
   return lp == (length(haystack) - length(needle) + 1);
}

_command ExecBuffer()
{
   _str pathBuffer = p_buf_name;
   //_str extBuffer = get_extension(p_buf_name);

   if (p_extension=="rb" || p_extension=="ruby" || p_extension=="rbw") {
      // Ruby buffer: run *_test files through rake's test loader,
      // anything else as a plain Ruby script.
      _str simpleName = strip_filename(pathBuffer,'PE');   // name without path or extension
      if (EndsWith(simpleName,"_test")) {
         start_process();
         clear_pbuffer();
         concur_command("ruby -Ilib;test \"c:/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake/rake_test_loader.rb\" \"":+pathBuffer:+"\"");
      } else {
         start_process();
         clear_pbuffer();
         concur_command("ruby -S -w ":+pathBuffer);
      }
   //} else if (p_extension=="html" || p_extension== "htm") {
   } else if (p_extension=="bat") {
      // Batch files are run directly in the process buffer.
      start_process();
      clear_pbuffer();
      concur_command(p_buf_name);
   } else {
      // Other extensions: do nothing (for now).
   }
}

Wednesday, June 28, 2006

SlickEdit - Mouse Free Development

I use SlickEdit whenever I can for all hardcore editing tasks, including C#, NAnt and batch files. I still go to Visual Studio for debugging and Forms design.
SlickEdit is not a good choice if you're trying to save cash, but for someone like me who codes all day, it's simply the best, IMHO.

I don't use its ability to load Visual Studio projects. Instead I create my own SlickEdit workspaces and projects, and in the Project tools I call an actions.bat file with arguments such as /build and /run.
"actions /build" calls NAnt with the corresponding target.
Then I have keyboard shortcuts to launch builds. SlickEdit provides Ctrl-Shift-Up/Down arrow to skip between error locations as reported by most compilers.
I do most development using a modified NUnit runner, and runtime progress is reported in an XML file that appears in SlickEdit after a run.
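Going back to actions.bat: as a sketch, it is little more than a dispatcher along these lines (the paths, build file and target names are made up for illustration):

@echo off
if "%1"=="/build" R:\thirdparty\NAnt\bin\NAnt.exe -buildfile:R:\projects\MyProject\MyProject.build build
if "%1"=="/run" R:\total\bin\MyProject\MyProject.exe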

All up, I can edit, compile, fix errors and run without leaving SlickEdit or touching the mouse.

NAnt with SlickEdit

I'm using SlickEdit for writing NAnt .build scripts, which are essentially XML, and I'm finding SlickEdit is quite a nice XML editor.

Here's how to set it up nicely:

1) The SlickEdit completion features are wonderful (better than Visual Studio's, IMHO), in particular the features bound by default to Ctrl-Shift-, and Ctrl-Shift-. and Ctrl-Shift-Space. The first two match words before and after the current cursor position. The word being used as the source of the completion is highlighted, and each press moves backwards or forwards to a different match in your buffer. Ctrl-Shift-Space extends the match to include more words following the current source word each time you press it.

To enable matching of any text, go to "Tools->Macro->Set Macro Variable". Enter def_complete_vars as the variable and a value of 0. This means that completion works not just for code symbols, but for any text. See vslick\macros\compword.e for further info.

2) It's not well documented, but SlickEdit will provide further assistance in editing XML if it is given a schema. This works for DTDs and XSDs. NAnt's schema is available at http://nant.sf.net/release/0.85-rc4/nant.xsd.

You have to add a URL mapping in "Tools->Options->URL Mappings". First download the schema from the above URL and save it somewhere local, then add the URL in the From field and put the path to your local copy in the To field.

You also need certain attributes in your document. Here is a minimal NAnt .build file that SlickEdit understands:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<project name="BuildProj"
         basedir="."
         xmlns="http://nant.sf.net/release/0.85-rc4/nant.xsd"
         xmlns:nant="http://nant.sf.net/release/0.85-rc4/nant.xsd"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://nant.sf.net/release/0.85-rc4/nant.xsd R:\thirdparty\NAnt\schema\nant.xsd"
         xsi:noNamespaceSchemaLocation="http://nant.sf.net/release/0.85-rc4/nant.xsd">

  <target name="Build">
    <!-- Do Stuff Here... -->
  </target>
</project>


After creating this file, click the yellow XML tick on the toolbar to ensure it is well formed (fix any errors). Then click the grey XML tick to check your file against the schema. I regularly get "Error Schema in http://nant.sf.net/release/0.85-rc4/nant.xsd has a different target namespace from the one specified in the instance document.", which I have been successfully ignoring.

Now save, close and reload this file to see the assistance features in action.

  • typing a < will show a drop down list of all the NAnt commands you could use.
  • when between the < and > it will show applicable attributes for the current tag.
  • typing </ will be completed to close the closest previous open tag.
  • recognised tags will be coloured differently to unrecognised ones.


You can also use the Project "Execute" feature and the Build shell window to run NAnt with your file.

All in all, SlickEdit makes a nice general purpose IDE.

Monday, April 24, 2006

Quality Practice #1

Each check-in should consist of a small number of complete features: no more, and no less.

For example, say you are about to add feature XYZ to your program. You've already tried it out and know exactly what change you are going to make (no experimenting on production code).

1. Update your local working copy from your version control system.

2. Make your changes

3. Get it to compile, pass tests etc.

4. Check in to the version control system. In the comment, describe the change in functional language (what the change achieves), and perhaps include anything especially important that other developers should be aware of. You don't need to include minute detail, as that can be obtained much more accurately using a diff tool.

TortoiseSVN lets you see which files have changed, and diffs of the code changes you are making. I use this to write my comments, and I scan the changes to ensure I am not accidentally making changes I don't intend to, and to reverse unnecessary changes, like inserted blank lines that weren't intentional (this keeps the repository diffs meaningful and concise).
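The same cycle from the Subversion command line looks roughly like this (the commit message is just an example; TortoiseSVN wraps the same operations in a GUI):

svn update
(make the change, build, run tests)
svn status
svn diff
svn commit -m "Add CSV export to the reports screen"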



This way, each revision represents the system at a meaningful point.

Sunday, March 12, 2006

Test Driven Heresy

In my view, some of Test Driven Development's greatest benefits are:

  • Guards developers from breaking each other's code.

  • Protects a project's quality from going backwards as bugs increase with added features.

  • Provides a series of short-term goals (tests that pass, code that runs) for developers to accomplish, fostering a feeling of progress.

  • Enables components to be created and tested in isolation from the complexities of the greater system. At integration time, bugs are greatly reduced, and most likely to be integration issues rather than the fault of either the system or the new component.

  • Enforces disciplines such as minimizing dependencies. Your new component can't be dependent on other components in the system if they don't exist in your test environment.

  • Provides confidence that tested components work, eliminating them from suspicion when the system fails, so fault finding is simplified.


These benefits are promoted as more than paying for the costs of TDD, chiefly the significantly greater amount of code to create, maintain and version control. Sometimes creating the test can be much more complex than creating the component it is testing!

But hold on: what if your environment means that the benefits don't pay for the costs? What if you are a sole, experienced developer working on hardware control code? You don't have the team issues that TDD helps to solve, reducing your benefit, and TDD is much harder to do with external devices that are outside your control, increasing the cost. You may be writing code much like you have been for many years, code that is simple enough that it rarely has errors that are not immediately apparent. Here you have a choice: religiously follow TDD, assuming the gurus know best, or do your own cost/benefit analysis and (shock) choose to just write code and test informally by running the system.

So here's my more liberal TDD for such situations:

  • Tests don't have to pass or fail to be of benefit. Sometimes evaluating the output requires heuristics way beyond the code you are testing. Just use a "test" to exercise the code and output values as it progresses, then manually inspect the output to determine whether it is doing the right thing (see the sketch after this list). You have still gained the TDD benefits of building and running code in isolation from the system, the satisfaction of seeing it work, and the discipline of designing the code from a user's point of view, and many errors will be apparent anyway, especially those that throw exceptions. Note that tools such as JUnit and NUnit don't encourage this kind of use, believing that a test should only output information on failure. It is still possible, however.

  • For a given major project called BlueWhale, set up two additional projects called BlueWhale.Tests and BlueWhale.RnD. BlueWhale.RnD is your playground for trying new things. Here tests are not required to pass or fail. You can begin creating a new class just above the test itself, for convenience, without the hassle of making new files or changing the production code. It might have dependencies that aren't dependable. It doesn't matter because you might change tack and blow it away anyhow. When a test is working, and the test subject becomes worthy of inclusion in the system, graduate the test by moving it into BlueWhale.Tests. Here it should pass or fail, and follow all the usual TDD requirements. BlueWhale.RnD is also a place to demote code that was in the production system, has been replaced, but may still be of value in the future.

  • Apply Pareto's 80/20 principle to test coverage. Some things are too obvious to write a test for. Your time would be better spent elsewhere (such as writing more tests for more critical areas), or in single-stepping through the code in the debugger, inspecting variables as it goes, or simply printing the output and scrutinizing it. Testing high-level code inherently tests the low-level code it uses (though some errors will escape this), so perhaps attempt to cover most code at the high level, and drill down with more tests over time.

  • You can cut corners and add dependencies in tests that you wouldn't in production code. Runtime performance is likely not an issue. You can use third-party libraries, alternative scripting languages (Python, Ruby) and external tools (GNU Diff). The worst that can happen is that a change causes these tests to go on passing when the code under test fails, but a more likely scenario is that they will fail because of their dependencies. So fix them. No harm done.

  • You still gain the benefits of designing the code to be testable, writing from a user's point of view, developing without the hassles of the larger system, and so on.
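As a tiny illustration of the first point above, here is the kind of "exercise" test I mean, in NUnit-style C# (the MotorController class is made up; the test never asserts, it just runs the code and prints what it sees):

using System;
using NUnit.Framework;

[TestFixture]
public class MotorControllerExercise
{
    [Test]
    public void ExerciseRampUp()
    {
        MotorController controller = new MotorController();  // hypothetical class under test
        for (int speed = 0; speed <= 100; speed += 10)
        {
            controller.SetSpeed(speed);
            Console.WriteLine("requested {0}, actual {1}", speed, controller.ActualSpeed);
        }
        // No Assert calls: an unexpected exception will still fail the test,
        // and the printed values are inspected by hand.
    }
}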

Confident Programming

I propose that a core goal in the set-up of a development environment, including the choice of language, in-house code libraries, version control, build process, requirements management, third-party components and so on, is to enable developers to write application code confidently.

To write code confidently requires:

  • The ability to hold a complete concept of the problem at hand in the brain at once.

  • Definition and separation of the problem from other problems.

  • Definition and separation of the problem from its dependencies.

  • The ability to experiment and undo. Version control to allow experimentation without affecting other developers or production code.

  • Coding conventions – to avoid having to think about how to name or format things.

  • In-house code libraries – basically a collection of solved problems. These must be robust, reliable, organised, well maintained and easy to add to. A graduate developer once asked me, "How can you write reliable code on top of unreliable libraries?" Good question: if it's possible, it's probably not worth the effort. My answer was that you build libraries that are reliable from the ground up.


Without these things, an astute developer can be overwhelmed by all the possible side effects and implications of what they are doing. A less attentive developer will overlook these things, leaving them to be discovered in testing, when major changes are required, or, worse, in the hands of customers. Testing becomes more important when a developer is less able to know at coding time that what they are doing is correct, or that it will remain correct as the code it is built on shifts over time. Application code becomes more platform-specific, less agile, and includes more low-level detail.
While much has been said about defensively finding and removing bugs from software, there is disproportionately little discussion about how to create an environment where bugs are not created in the first place.