#!/usr/bin/ruby
#
# mongrels by Gary McGhee
#
# This is a startup script for use in /etc/init.d
# and for starting/stopping etc all packs of mongrels at once
#
# based on code from http://www.simplisticcomplexity.com/2006/9/26/start-and-stop-all-your-mongrel_cluster-processes/
#
# features :
# + finds and runs your rails apps assuming they are under
# APP_DIR and have a mongrel_cluster.yml file in their
# config folder
# + uses mongrel_cluster's --clean feature to clean up processes
# without pid files. This is important! Otherwise processes
# can survive a deployment and your old version will keep
# running despite the deployment.
# + status feature gives details on procs and pid files for all
# apps
# + by default, commands apply to all found apps
# + stop will do nothing and won't give an error if the app is
# not running
# + start and restart will do nothing and won't give an error if
# the app is already running
# + restart will just start if it is currently stopped
#
# Installation
# 1) put this in /etc/init.d/
# 2) chmod 755 /etc/init.d/mongrels
# 3) update-rc.d -f mongrels defaults
#
# Sample mongrel_cluster.yml :
#
# ---
# cwd: /var/www/defaultdomain/current/
# port: "8000"
# environment: production
# #address: 127.0.0.1
# pid_file: /var/run/mongrel_cluster/my_app.pid
# servers: 8
# #log_file: /var/www/defaultdomain/current/log/mongrel.log
#
require 'fileutils'
require 'yaml'

SCRIPT_NAME      = 'mongrels'
APP_DIR          = '/var/www'
SCRIPT_VERSION   = '1.0'
DEFAULT_PID_FILE = '/var/run/mongrel_cluster/mongrel.pid'
DEFAULT_USER     = 'root'
def cluster_config_file(app)
  File.join(APP_DIR, app, "current/config/mongrel_cluster.yml")
end

def load_cluster_config(aFile)
  (YAML::load(File.open(aFile)) rescue nil)
end

def is_cluster?(app)
  File.exists?(cluster_config_file(app))
end
# not currently used, but left for potential future use
def is_started?(app, aConfig=nil)
  pid_file    = (aConfig && aConfig['pid_file']) || DEFAULT_PID_FILE
  pid_path    = File.dirname(pid_file)
  pid_pattern = File.basename(pid_file).sub(/\.([^.]*)$/, '.*.\1')
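  # mongrel_cluster writes one pid file per port (e.g. my_app.8000.pid), so the
  # glob below turns 'my_app.pid' into 'my_app.*.pid' to match all of them.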
  return !Dir[File.join(pid_path, pid_pattern)].empty?
end
def cluster_command(aApp, aCommand)
  config_file = cluster_config_file(aApp)
  config      = load_cluster_config(config_file)
  pid_file    = (config && config['pid_file']) || DEFAULT_PID_FILE
  user        = (config && config['user']) || DEFAULT_USER
  pid_path    = File.dirname(pid_file)
  pid_pattern = File.basename(pid_file).sub(/\.([^.]*)$/, '.*.\1')
  if %w(start restart).include? aCommand
    # make sure the pid directory exists and is owned by the configured user
    `mkdir -p #{pid_path}`
    `chown #{user}:#{user} #{pid_path}`
  end
  options = (%w(start stop restart).include? aCommand) ? '--clean' : ''
  `mongrel_rails cluster::#{aCommand} #{options} -C #{config_file}`
end
puts
cluster_apps = Dir.open(APP_DIR).to_a.delete_if { |aApp| !is_cluster?(aApp) }

VERBS = {
  'start'   => 'starting',
  'stop'    => 'stopping',
  'restart' => 'restarting',
  'status'  => 'getting status for'
}

case command = ARGV.first
when 'start', 'stop', 'restart', 'status'
  cluster_apps.each do |aApp|
    puts VERBS[command] + ' ' + aApp
    puts cluster_command(aApp, command)
  end
when 'version'
  puts "#{SCRIPT_NAME} version #{SCRIPT_VERSION}"
  exit
else
  puts "Usage: #{SCRIPT_NAME} {start|stop|restart|version|status}"
  exit
end
Just Enough Software Quality
This blog brings together the ideals of Test Driven Development and other Software Quality practices with the reality of small-time commercial software development. I am trying to apply Just Enough of Waterfall, Agile, Extreme Programming, Test Driven Development (TDD) etc. to benefit from their returns, while avoiding the significant costs of following them to the letter. I am working on web applications with Adobe Flex and Ruby on Rails.
Monday, February 11, 2008
Script for starting and controlling Rails Mongrel clusters automatically
This script is designed to be a simple way of launching all Rails apps installed under a root path (default: /var/www) and starting/stopping them as required.
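As a quick sanity check, something like the following (a minimal sketch, assuming the same /var/www/<app>/current/config/mongrel_cluster.yml layout the script expects) lists the apps the script would pick up and how many mongrels each is configured for :
require 'yaml'
Dir.entries('/var/www').sort.each do |app|
  config = File.join('/var/www', app, 'current/config/mongrel_cluster.yml')
  next unless File.exist?(config)
  servers = (YAML.load_file(config)['servers'] rescue nil) || '?'
  puts "#{app}: #{servers} mongrels"
end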
Wednesday, February 06, 2008
Single common root path for team development with Linux, Windows and Eclipse
For a long time I've adopted the philosophy of having a single, global path root for all development that is consistent across the whole software team's machines. On Windows I've used SUBST to create a virtual drive letter (eg. R: for 'Repository'). I point R: at the root of the local working copy of the repository I'm currently using. If I have 2 working copies (eg. the trunk and my current branch) then I'll update R: to point at the current one.
This means that configuration files, eg. for my editor or IDE, always refer to the correct files even when I've switched branches. If the whole team does it, we can even share configuration files, build scripts etc via the repository and they will work, regardless of where the current working copy on each machine is actually located.
Well now I'm using Adobe Flex Builder 3 Beta 2 on Linux, which is based on Eclipse.
For Linux I've arrived at the following as my best practice :
ln -s /path/to/working_copy /mnt/root_name
I'm actually running Linux in a virtual machine, and my working copy is shared into it via VMware's shared folders. On Windows I use drive V: via SUBST, and so my main link for general repository use under Linux is now /mnt/v/.
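If you switch working copies often, re-creating the link by hand gets tedious. Here's a tiny Ruby sketch of the idea (the link path is the /mnt/v from my setup above; the script name and target path are just made-up examples) :
#!/usr/bin/ruby
# usage: switch_root.rb /path/to/working_copy
require 'fileutils'
link   = '/mnt/v'
target = ARGV[0]
abort "usage: switch_root.rb /path/to/working_copy" unless target
File.delete(link) if File.symlink?(link)  # remove only the old link, never its target
FileUtils.ln_s(target, link)
puts "#{link} -> #{File.readlink(link)}"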
That's all fine on the command line, but not under Eclipse. Using /mnt/v in Eclipse appears to work, but the .project file expands the symbolic link to something long and ugly that won't match other developers' machines. Damn!
Thankfully there is a simple solution. In Eclipse go to the menu Window->Preferences/General/Workspace/Linked Resources.
Here you can add variables for use elsewhere in the workspace. The DOCUMENTS variable should already be defined here.
So click "New..." and then enter :
Name: V (I'm using capitals to follow the existing convention)
Location: /mnt/v
and click OK and OK again.
Now go to "Project->Properties/Flex Build Path" and for any source folders change them to something like "${V}/this/that/etc". If you add new folders by navigating to /mnt/v first, they will automatically get the ${V} treatment.
Also note that in .project you will get :
<link>
  <name>[source path] etc</name>
  <type>2</type>
  <locationURI>V/this/that/etc</locationURI>
</link>
instead of :
<link>
  <name>[source path] etc</name>
  <type>2</type>
  <location>/mnt/hgfs/something/this/that/etc</location>
</link>
(Note the lack of curly brackets in the first example)
This .project can now be version controlled and used team-wide. However I think the V variable definition will have to be done per machine. Of course, the /mnt/v link will also have to be created per machine, but that should be maintained by each developer anyway.
Tuesday, December 04, 2007
XRay debugger for Adobe Flex 3
XRay may be the best kept secret in Flex development tools. It's taken me a few months of Flex development to discover it. I found it while looking for a runtime inspector like the one available for Delphi. I want to modify my chart properties and see the results in real time. I would also like to evaluate ActionScript at runtime, much like IRB in Ruby. XRay supports both these things, and much more.
The Flex 3 debugging features are quite good, but I regularly get frustrated with the evaluation/watch feature failing to evaluate or giving scant detail. Of course you can use trace(), but that requires modifying your code, recompiling and getting back to the correct point in the code, every time you want to inspect something new.
XRay promises much more. It's not trivial, however, to find out how to get it working with Flex 3. I've just got it going, so here's the scoop :
1) XRay consists of a "connector" and the "interface" itself.
2) There are many versions of the connector, for Flash, Flex, Haxe and older versions of these. For Flex 3, we'll use the SWC version from
http://code.google.com/p/osflash-xray/downloads/list
At this writing, the latest was http://osflash-xray.googlecode.com/files/Xray_Flex2_Library_v0.5.swc
3) I placed this file in my thirdparty\flex\ folder which is used across multiple projects.
4) In Flex Builder I went to Project->Properties, selected "Flex Build Path" and then the "Library path" tab. I selected "Add SWC Folder..." and chose the thirdparty\flex\ folder. I then opened the main mxml file of a simple test project and inside the script tags I typed :
"import com."
blitzagency didn't appear in the lookup list at this point. I fiddled around and added the SWC specifically using the "Add SWC..." button under "Add SWC Folder...".
blitzagency was now found. It may have been just a delay issue for Flex to parse the SWC. Also, I already had the thirdparty\flex\ folder in my source path (may be relevant).
5) I completed the "import" line as :
import com.blitzagency.xray.inspector.flex2.Flex2Xray;
and immediately after added :
private var xray:Flex2Xray = new Flex2Xray();
6) My project now compiled and ran (I used debug mode).
7) We now need a version of the XRay interface. I used the SWF version from http://www.rockonflash.com/xray/flex/Xray.zip
8) While my test project was running, I ran XRay.swf and the XRay window appeared. Clicking Go under "Application View" showed a tree with my application name.
Thanks to Steve Mathews and "John" for their help on the osflash mailing list.
References :
The main XRay site : http://osflash.org/xray
Flex notes on Google Code : http://code.google.com/p/osflash-xray/wiki/Flex2_SWC_notes
Friday, November 16, 2007
Gutsy Gibbon for Adults
I'm building a clean VM of Ubuntu 7.10 "Gutsy Gibbon" and thought I'd remove the fancy Compiz window manager. I then discovered that the previous release ran one called Metacity and looked into re-enabling that. Uninstalling Compiz isn't enough - you have to do the following to enable Metacity, otherwise you get some weird effects.
So, from http://ohioloco.ubuntuforums.org/showthread.php?p=3770282 :
As root (sudo gedit), modify: /usr/share/gnome/default.session and change
0,RestartCommand=gnome-wm --sm-client-id default0
TO:
0,RestartCommand=gnome-wm --default-wm /usr/bin/metacity --sm-client-id default0
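If you'd rather not open an editor as root, the same change can be scripted. A minimal Ruby sketch (same file and replacement strings as above; run it with sudo) :
file = '/usr/share/gnome/default.session'
text = File.read(file)
text.sub!('gnome-wm --sm-client-id', 'gnome-wm --default-wm /usr/bin/metacity --sm-client-id')
File.open(file, 'w') { |f| f.write(text) }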
Thursday, December 14, 2006
Convincing Prospective Employers that your Delphi skills are relevant to C# / VB / .Net
So you know Delphi and find yourself in the job queue again. The Delphi job postings are far fewer than the last time you were looking, and some of them are for converting to .Net. How do you go about cross-training to .Net, and convincing prospective bosses that your skills are transferable?
- One answer lies above; look for jobs involving converting Delphi projects to C# (or Java, if that interests you). They'll value your Delphi skills and pay you to learn another language.
- Look into certification. There is a whole industry around training towards MCSD and MCAD, either at home with books or via training courses. In my experience, books are a far more effective and much cheaper way to learn anything substantial.
- Provide a Delphi to C# comparison with your resume.
- At least start working towards MCSD, and say so in interviews and in your resume. That shows you are serious and approaching this cross-training in a professional manner. Also, I would guess that whether you are 10% or 80% towards finishing would make little difference to many employers.
- Start a real world project eg. get involved in an open source project, or build an ASP.Net website. I started this with www.seekdotnet.com and was impressed with their SQL Server package features and price, but can't speak for the quality of their service as I haven't launched my site yet. Again, mention this in your resume and in interviews.
- A website project (more than a Windows Forms project) is particularly impressive as they can very easily try it out and see that you're capable of creating something real.
- Read books about resume writing, eg. "What Colour is Your Parachute" is the classic. Spend days on your resume. A day spent on it could equal a month's worth of waiting for job ads to appear, waiting for them to get back to you etc.
- Research companies you would like to work for. Microsoft provides a listing of "Partners" on its website, which is a pretty complete list of .Net shops in your area. Get a directory of your local "technology park" or precinct. Look for government innovation development programs, and the lists of companies taking part in them.
- Don't just wait for job ads to appear. Print out your resume, dress up, and approach them. My theory here is that when job ads are written, criteria are decided upon based on their ideal employee, with phrases such as "12 months .Net experience". Immediately you are behind with your measly 2 weeks of .Net experience. When you are face to face with them in their office however, you are a real person and they can judge your character, enthusiasm etc and may even make a position for you that didn't previously exist. They may not have time to go through the hassle of advertising when they really need you, or they may be just about to advertise. I did this for about 4 days and got one 4 week C# contract followed by a job offer, 1 call encouraging me to apply for a new position and one email asking me if I was still looking for work, 4 weeks later. This was after 3 months of answering job ads with little response. By the way, my new job still came from answering an ad.
- Keep all job ads that interest you, even ads for the wrong job but the right sort of company. They may be worth approaching later, and may contain important info such as the name of a manager to call. Having someone to ask for is an easy way to get past a difficult receptionist. Also, if you were the second best candidate this time, you might get the job next time. Resist the urge to resent that they didn't choose you.
- Companies using Delphi, past and present, are still your friends. They will be easy to convince that your Delphi skills are valuable, even if they no longer use it. My 4 week C# contract was with a previously Delphi-based company, and my current boss has done some serious Turbo Pascal work in the past.
Friday, December 08, 2006
Alternative to SUBST: Local Network Shares
Over the past few years I have set up 2 software teams with associated tools, file structures and processes in different companies.
Of high importance is the structure of the repository in the Version Control System.
Both times I have been aiming for :
- ability for the developer to have multiple local working copies, and flexibility to locate their working copies in any folder on any drive.
- consistency across all developer machines
- ability to use and version control the files of any tool, and have those files be usable by other developers
- all necessary files under a single root.
Currently we are working with Subversion and TortoiseSVN, NAnt and CruiseControl.Net. Our code is in C# and C++.
The above has been achieved and has been working well using the Windows SUBST command to create a drive R: (for repository) under which all files are stored. Under this, the folders are projects, tools, thirdparty, total (for the build) and rnd (for Research aNd Development).
When I upgraded to V1.4 of Tortoise however, the icons indicating file status (up to date, modified, added etc) began behaving strangely. On reading the Tortoise list (subject: "TortoiseSVN Bug with overlay icons on network drives") I see from Stefan Küng :
> The status cache can't work reliably with SUBST drives! The cache works
> by monitoring the filesystem for changes. Every change fires an event
> which the cache catches and acts accordingly. But if you have a SUBST
> drive, then even though you have two (or more) paths that point to the
> very same location on the filesystem, only one event is fired. Which
> means you will *always* get unpredictable results.
Later, another poster (from dfa.com) says :
> Just a suggestion, but have you tried loopback mounting a network drive?
> Share the folder you create a subst of as a private share (end the share
> name with $), read/write only by the user (or read/write only by the
> machine, if you prefer), and mount that network drive as another drive
> letter. I used to use subst too, but found that too many programs broke
> with it. Never had a problem with the loopback network drive.
Hey! I never thought of that!
So I tried it, and hit the next problem : .Net's Code Access Security. Any mounted network share (even one from your own machine) is treated as being in the "Local Intranet" security zone, which is by default "medium trust". This isn't good enough for some things, such as NAnt and Visual Studio.
After further digging, and thanks to the references below I now have my local share R: drive fully trusted, without affecting the rest of the local intranet zone.
Here's the answer :
- Share the folder you want to mount
- Map a drive letter to it.
- execute the following lines in a dos shell or batch file :
C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CasPol.exe -q -pp off -machine -addgroup 1 -url file://R:/* FullTrust -name "Drive_R" -description "R: Local Network Drive"
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CasPol.exe -q -pp off -machine -addgroup 1 -url file://R:/* FullTrust -name "Drive_R" -description "R: Local Network Drive"
Each line adds the policy for a different version of .Net. Each version's security operates independently.
Now it just needs a batch file to be able to do :
MapDrive R: D:\Repos\DriveR
Maybe another day...
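In the meantime, here's a rough, untested sketch of that MapDrive idea in Ruby rather than batch. The script name is made up, and the drive letter, folder, share name and the two CasPol paths are just the examples from this post :
#!/usr/bin/ruby
# usage: mapdrive.rb R: D:\Repos\DriveR
drive, folder = ARGV
abort "usage: mapdrive.rb R: D:\\Repos\\DriveR" unless drive && folder
share = File.basename(folder) + '$'          # private share name, e.g. DriveR$
# share the folder, then loopback-mount it as the requested drive letter
# (tighten the share permissions to suit - see the quoted advice above)
system %(net share #{share}=#{folder})
system %(net use #{drive} \\\\127.0.0.1\\#{share})
# grant the new drive FullTrust for each installed .Net version
['v1.1.4322', 'v2.0.50727'].each do |ver|
  caspol = "C:\\WINDOWS\\Microsoft.NET\\Framework\\#{ver}\\CasPol.exe"
  system %(#{caspol} -q -pp off -machine -addgroup 1 -url file://#{drive}/* FullTrust -name "Drive_#{drive[0,1]}" -description "#{drive} Local Network Drive")
end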
References
- Using CasPol to Fully Trust a Share
- How Do I...Script Security Policy Changes?
- Getting CLR Security Right - Seeing Double
Monday, September 11, 2006
Firing current buffer as a Rails test in SlickEdit
I've started a new adventure building a fairly large distributed database driven website using Ruby on Rails and eventually Adobe Flex for a pretty front end (this is a *very* cool combination). I've got SlickEdit 11 as my IDE, which now supports both Ruby and ActionScript for Flex - nice timing there.
Anyway, what inspired this post was attempting to use my RnD-style development with Ruby's Test::Unit, which implies using Rake. I strongly believe that it is a major productivity boost to be able to edit code, run it, and view the results all without touching the mouse or fiddling with windows. We developers repeat this cycle many times a day, and the distraction plus the seconds or minutes taken to do this add up.
I've managed to get the basic CRUD web stuff going with Rails' wonderful scaffolding feature, generated from an Access .mdb supplied by my boss. 32 tables generated some 503 files!
Now I want to start writing some application logic, and want to set up a little sandbox where I can work within a single file, writing tests, writing the application class, and executing by pressing a key combination in SlickEdit.
Scaffolding in Rails has generated 256 tests already, which can be executed with "rake test_unit" for example, but how do you run a single test ?
This post http://nubyonrails.com/articles/2006/07/28/foscon-and-living-dangerously-with-rake got me started, and running "rake test_unit" produces
c:/ruby/bin/ruby -Ilib;test "c:/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake/rake_test_loader.rb"
followed by all the test files to run.
So, I just needed to run this command line appended with the current buffer name to execute a test that I'm currently editing. If it isn't a test (doesn't end with _test before the extension) then I execute it as a normal ruby file. I also execute .bat buffers like this, and potentially other extensions.
I ended up with the following code in my vusrmacs.e.
One thing yet to tidy up is to not hardcode the location of rake_test_loader.rb. I haven't looked into this yet (one possible approach is sketched after the code below).
Sorry for the rushed post. I hope it's of use to someone anyway.
Gary
// Returns true if haystack ends with needle (Slick-C strings are 1-based).
boolean EndsWith(_str haystack, _str needle) {
   int lp = lastpos(needle, haystack);
   return lp == (length(haystack) - length(needle) + 1);
}

// Run the current buffer: as a Rails unit test if its name ends in _test,
// otherwise as a plain Ruby script or batch file.
_command ExecBuffer()
{
   _str pathBuffer = p_buf_name;
   //_str extBuffer = get_extension(p_buf_name);
   if (p_extension=="rb" || p_extension=="ruby" || p_extension=="rbw") {
      _str simpleName = strip_filename(pathBuffer, 'PE');
      if (EndsWith(simpleName, "_test")) {
         start_process();
         clear_pbuffer();
         concur_command("ruby -Ilib;test \"c:/ruby/lib/ruby/gems/1.8/gems/rake-0.7.1/lib/rake/rake_test_loader.rb\" \"":+pathBuffer:+"\"");
      } else {
         start_process();
         clear_pbuffer();
         concur_command("ruby -S -w ":+pathBuffer);
      }
   //} else if (p_extension=="html" || p_extension== "htm") {
   } else if (p_extension=="bat") {
      start_process();
      clear_pbuffer();
      concur_command(p_buf_name);
   } else {
      // unsupported extension - do nothing
   }
}
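On the hard-coded rake_test_loader.rb path mentioned above: one possible fix (just a sketch, assuming the standard RubyGems layout under Gem.dir) is to glob for the loader and paste the result into the macro, or shell out to something like this when building the command line :
require 'rubygems'
# take the last (by name) installed rake gem's copy of rake_test_loader.rb
loader = Dir[File.join(Gem.dir, 'gems', 'rake-*', 'lib', 'rake', 'rake_test_loader.rb')].sort.last
puts(loader || 'rake_test_loader.rb not found')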