Wednesday, December 26, 2012

20121227 Green computers are cool.


This is about computers, hang with me or skip to the end....

Ok, I'm not a "green" person.  Personally, I think the entire argument is a waste of time, and the people who started the argument know it.  Both sides of the argument are wrong, and if they stopped shouting people down long enough to talk rationally about it, they'd have to admit to it.

People burn hydrocarbons of different forms as a way to stay alive.  Yep, it's that simple.  If we stop burning hydrocarbons on a mass scale, a whole bunch of someones have to stop living.  There are only 3 options to the entire debate.  #1 Invent a *REAL* technological alternative which is better than hydrocarbons (or creates hydrocarbons from an acceptable source).  Or, #2, kill everyone.  Or, #3, deal with it.

#1 will happen automatically the moment someone invents it.  No one will need to be a pinhead if someone invents a free energy machine that provides a 1 gigawatt baseload power source with a usable footprint that is safe and cost effective.  The Green Revolution will be automatic and will happen at Internet speed...  all by itself.  The less the greenie heads are involved, the faster it will happen.  So, if you're a greenpeace-r and the magic energy source shows up, stay calm and let paradise on earth happen without being a pain in the ass.

#2 is just stupid.  If you're verbal about population control, shut up and do something productive.  There's only one effective method of population control, historically or statistically speaking: make everyone happy.  This is pathetically easy to show, and most anyone who's looked into the subject knows it's the answer.  The better off a population is (which can be measured by how much electrical power is available to it), the fewer kids it has.  But, usually, the people who are verbal about how there are too many people on the planet are the ones who advocate for the policies that make people more miserable.  Every time I hear someone say something along the lines of "There are too many people!  Let's make the situation worse by making everyone miserable!" I just want to beat the idiot senseless... but that would be purposeless, because they have no sense to start with.  To reduce CO2 output to the levels they want in the timeframes they're talking about, you'd have to remove far too many people from the planet for anyone to accept.  Besides, the Population Controllers would be the first ones removed, which would take us right back to options #1 or #3.

#3 is what's going to happen, period.  No reason to shout about it, it's just the fact.  Stop your internal dialog, and don't bother giving me any lip.  That's what's going to happen barring #1 happening.  Get over it.

I feel so much better now that I have that off my chest.... where was I?  :-)

---------------------------In your face reality ends here -----------------------------

------------------------Dialog about computers starts here --------------------------

So, having said all that...  I have always had a thing for efficiency, mainly because I have an engineer's mentality.  If you have a choice between X that uses Y amount of power, and X that uses Y/2 amount of power, why wouldn't you use the more efficient of the two choices?

My uber cool Intel i7 has finally reached the point that I can't take it any more.  The motherboard Ethernet ports died over a year ago.  The USB has been acting flaky for longer than that, but recently it has taken to just turning off after an hour or two of operation (Blogger auto-saves are a great feature).  The past few months, the computer will just hang solid, blue screen randomly, or lock up in the BIOS screen, during POST, or during the Microsoft F8 RAM test.

So, that's it.  I can't take it anymore, I need a new computer.  So, I started what I do every time I have something like this: I made a spreadsheet, and started trying to create a model that will tell me which options give me the perfect combination that I desire.  (It took me 3 years to pick a wood stove.  7 years to pick a kit plane to build.  Our current dog took 6 months of breed study.  And I wish we'd taken more than an hour to pick out a dishwasher (piece of junk).)

https://docs.google.com/spreadsheet/ccc?key=0AvDx0QSgEqOodGRhb21GeTJPd2pSTm8tdnFGN3habkE

In the old days, it was normally just Price and Performance.  A simple performance benchmark (or combination of benchmarks) divided by price gave a simple answer; factor in how much I was willing to spend, and voila!  Rob has his new computer.

Nowadays, I factor in another number (which makes the entire analysis/decision process that much more fun and interesting): Watts.  Basically, how much power the entire system is going to use.  There are a number of places where you can get the design TDP figure (in watts) for each CPU and combination of options.

The main reason for this is that it's not uncommon for high end systems to run 500 to 1000 watts or more.  A 500 watt system will burn a lot of MONEY in power over a month if you're running it hard.  If you factor in a year's worth of power use (or two years, or three, depending on how long you normally go before replacing your computers), that can really change how you view each system.

If you run your computers 24x7, idle most of the time, knowing the idle load of each system becomes important.  If you only turn them on a few times a month, that's important as well.

All of this helps buy the correct system.  It's not hard to build a system which averages 200 watts.  Average that over a year with my local power cost of $0.15 a kWh (yes, I have cheap US electric, thank you very un-green fracking) and a 3 year replacement schedule, and that's nearly $800 over that timeframe.  If a slightly more expensive system uses half that power (between AMD and Intel, it's entirely possible), you can justify upgrading to a FASTER system which is also more power efficient.
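If you want to check that arithmetic against your own numbers, it fits in one tiny shell helper (powercost is a made-up name for this sketch, not a real utility):

```shell
# Total power cost for a box running 24x7.
# Usage: powercost AVG_WATTS DOLLARS_PER_KWH YEARS
powercost() {
  awk -v w="$1" -v rate="$2" -v yrs="$3" \
    'BEGIN { printf "%.2f\n", w * 24 * 365 * yrs / 1000 * rate }'
}

powercost 200 0.15 3   # 200 watt average, $0.15/kWh, 3 year lifetime
```

For the 200 watt example, that prints 788.40 dollars over the 3 years; halve the average watts and you halve the bill.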

Obviously, if you're some poor soul that lives where power is more expensive, that makes it that much easier to justify the better, faster, more power efficient systems.

All of this information is out there on a variety of websites...  Just takes a little bit of time to collect it and put it together.  I will say this: Atom boards are excellent for total cost if your processing requirements are modest.  Trick out an Atom, and it's cheap compute, up to a limit of course.

So, help the greenpeace-rs.  They want you to cut back on your power use.  If it makes sense, use less power by buying a better faster computer.

Or, you could invent a dark energy machine and usher the world into the new Age of Star Trek.  There are a couple technologies which do hold promise, but I'm not holding my breath.

Rob

Sunday, December 23, 2012

20121223 And.... all of those scripts make this:

OMGWTFBBQ

This is the home Internet link for the last 18 hours. Moo.










Same connection before the dark times came.


Edit:  It's official.  I asked my 7 year old which graph was "good" and which one was "bad".  He correctly identified the good and bad graphs.  I think these graphs are now at the level that management can use them.

20121223 Weialgo graphing.



As I stated in an earlier post, weialgo graphing is meant to give a statistical relative understanding of how well the network is working (how sloppy it is) over time.

What does that mean?  I wanted non-network engineers to be able to look at a graph and either go "That looks pretty good." or "OMGWTFBBQ That sucks!!!", quickly.

I don't want to sit around for hours trying to explain the difference between pinging between Las Vegas and San Jose, and between Chicago and Singapore (you see, there's this big chunk of dirt that we all ride on...).

Just because Singapore is on the other side of the Earth from Chicago doesn't mean that I should hold that against Singapore.  Until space warping technology is developed that allows Singapore and Chicago to get closer to each other (Tesseract anyone?), they will exist at a fixed distance from each other.

So, if the round trip time between Singapore and Chicago is, at its fastest, 300 milliseconds, then 300 ms is now the zero point on the Y axis of the graph.

Now, if I just graph deviation from minimum, this would be a simple "jitter" graph.  The problem with this is that it doesn't really visualize the impact of the network on interactive applications that use reliable network protocols (eg: TCP).  So, there has to be another factor in the model that creates this graph.

The easy thing to do here would be to pick some multiple of the jitter and scale by it.  The problem with that is that there's no set standard for that kind of multiplier, as I'm in uncharted territory with this attempt at graphing networks.  The graph data is going to be questioned one way or the other, and if the jitter were scaled by an arbitrary factor, that would just cause an unnecessary argument over what the multiplier should be until an academic body were to suggest a proper model that everyone could agree on.

In other words, I took the easy way out.

Instead of trying to come up with a jitter multiplier that would represent the actual impact to interactive applications, I just shift from relative jitter data to absolute rtt after jitter reaches a certain value.  What's a good value for the shift from jitter to rtt?  That's easy.  If you were able to put a non-sloppy network between two remote points, what's the maximum jitter that you think you'd be able to attain?

I picked 50ms.  Realistically, with a proper bandwidth sharing queuing system (Token Bucket-esque) and good lines, I would think that we could keep all jitter for a ping type polling system under 50ms.
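Putting the thresholds together, the coloring rule works out to: under 50ms of jitter, plot the relative jitter (green); 50ms of jitter or more, plot the absolute rtt (yellow); at or above 450ms absolute rtt, red.  A minimal sketch of that rule, assuming rtt samples in seconds, one per line, with the per-target minimum already known:

```shell
# Sketch of the weialgo coloring rule.  Assumes rtt samples in seconds,
# one per line on stdin, and the per-target minimum rtt passed as $1.
classify() {
  awk -v min="$1" '{
    if ($1 >= 0.45)            print "red"     # absolute rtt at/over 450ms
    else if ($1 - min >= 0.05) print "yellow"  # jitter at/over 50ms
    else                       print "green"   # under 50ms: relative jitter
  }'
}

# A 300ms-minimum path: tiny jitter, 80ms of jitter, and a 500ms sample.
printf '0.301\n0.380\n0.500\n' | classify 0.300
```

That run prints green, yellow, red, one per sample.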

The rest is mechanics.   So, into the mechanics.

I have two scripts crontab'ed.

wnrollup.sh and wnrolluphr.sh


root@server0:/mnt/ramdisk/1/weialgo5# cat wnrollup.sh
#!/bin/bash

DATEZ=$(date +%Y%m%d --utc --date='yesterday')

/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/gzip -1 -v /mnt/ramdisk/weialgo5log$DATEZ*.txt
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/mv -v /mnt/ramdisk/weialgo5log$DATEZ*.gz /storage/logs_weialgo/
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/bash /storage/weialgo5/wnrunreport.sh $DATEZ
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/bash /storage/weialgo5/wnmakehtml.sh


root@server0:/mnt/ramdisk/1/weialgo5# cat wnrolluphr.sh
#!/bin/bash

DATEZ=$(date +%Y%m%d --utc)
HOURZ=$(date +%Y%m%d%H --utc)

/bin/sleep 600
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/gzip -1 -v /mnt/ramdisk/weialgo5log$HOURZ.txt
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/mv -v /mnt/ramdisk/weialgo5log$HOURZ.txt.gz /storage/logs_weialgo/
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/bash /storage/weialgo5/wnrunreport.sh $DATEZ
/usr/bin/nice --adjustment=19 /usr/bin/ionice -c 3 /bin/bash /storage/weialgo5/wnmakehtml.sh



root@server0:/mnt/ramdisk/1/weialgo5#

wnrolluphr.sh is run once an hour.  wnrollup.sh is run once a day.  Obviously, depending on the number of systems that you're graphing, you might want to change how much work you make the server do.  This is from one of my home boxes; I run the graphs every hour, as I'm only polling 8 to 10 devices.  At work (on a slower box, no less), I'm polling almost 1000 devices, and I run the graphs once a day.

The crontab entries.  Nothing too surprising, you should be able to figure them out.

# Weialgo

55 * * * * root /usr/bin/nohup /bin/bash /storage/weialgo5/wnrolluphr.sh >> /dev/null &
3 19 * * * root /usr/bin/nohup /bin/bash /storage/weialgo5/wnrollup.sh >> /dev/null &

wnrunreport.sh

root@server0:/mnt/ramdisk/1/weialgo5# cat wnrunreport.sh
#!/bin/sh

# $1 is the date being reported on (passed in by the rollup scripts)

rm -f /mnt/ramdisk/tmp/*

cp /storage/weialgo5/weialgo5.lst /var/www/weialgo/weialgodevices$1.txt

cat /storage/weialgo5/weialgo5.lst | awk -F'/' '{print $1}' | sort | uniq > /mnt/ramdisk/routerlist.tmp

rm -f /var/www/weialgo/weialgosummary$1.txt

for IPADDRESSZ in `cat /mnt/ramdisk/routerlist.tmp`;
do
    echo $IPADDRESSZ report before
    rm -f /mnt/ramdisk/tmp/*
    /usr/bin/nice /bin/bash /storage/weialgo5/wngraph.sh $1 $IPADDRESSZ
    echo $IPADDRESSZ report after
done


root@server0:/mnt/ramdisk/1/weialgo5#

wngraph.sh

root@server0:/mnt/ramdisk/1/weialgo5# cat wngraph.sh
#!/bin/sh
#  $1 is the date or dates to be reported on
#  $2 is the IP address of the device being pinged
#  $3 is the name of the device that will be on the graph

echo $1
echo $2
echo $3

rm -f /mnt/ramdisk/tmp/*

ls -1 /storage/logs_weialgo/weialgo5log$1*.gz > /mnt/ramdisk/tmp/testfilelist.txt

for FILENAMEZ in `cat /mnt/ramdisk/tmp/testfilelist.txt`;
do
    NL=$'\n'
    FILENAMEZ=${FILENAMEZ%$NL}
    nice gunzip -c $FILENAMEZ | nice grep "$2," | nice awk -F',' '{print $2}' >> /mnt/ramdisk/tmp/test.csv
    echo $FILENAMEZ after
done

LINEZ=$(nice wc -l /mnt/ramdisk/tmp/test.csv | nice awk -F' ' '{print $1}' | nice head -n 1)
MINIMUMZ=$(nice sort -n /mnt/ramdisk/tmp/test.csv | nice head -n 1000 | tail -n 1)

echo $LINEZ  $MINIMUMZ

yellowcsv=/mnt/ramdisk/tmp/yellow.csv.$$
redcsv=/mnt/ramdisk/tmp/red.csv.$$
greencsv=/mnt/ramdisk/tmp/green.csv.$$
nice cat /mnt/ramdisk/tmp/test.csv | nice awk '{if (($1 - '$MINIMUMZ') < 0.05) print "0"; else if ($1 >= 0.45) print "0"; else print $1}' > $yellowcsv
nice cat /mnt/ramdisk/tmp/test.csv | nice awk '{if ($1 >= 0.45) print $1; else print "0"}' > $redcsv
nice cat /mnt/ramdisk/tmp/test.csv | nice awk '{if (($1 - '$MINIMUMZ') < 0.05) print $1 - '$MINIMUMZ'; else print "0"}' > $greencsv
YELLOWSUMZ=$(awk 'BEGIN {sum=0} {sum = sum + $1} END {print sum}' $yellowcsv)
REDSUMZ=$(awk 'BEGIN {sum=0} {sum = sum + $1} END {print sum}' $redcsv)

echo $1,$2,$YELLOWSUMZ,$REDSUMZ
echo $1,$2,$YELLOWSUMZ,$REDSUMZ >> /var/www/weialgo/weialgosummary$1.txt
sort /var/www/weialgo/weialgosummary$1.txt | uniq > /mnt/ramdisk/tmp/1.txt.$$
mv -f /mnt/ramdisk/tmp/1.txt.$$ /var/www/weialgo/weialgosummary$1.txt
DEVICENAMEZ=$(nslookup $2 | grep name | awk -F'=' '{print $2}' | awk -F' ' '{print $1}' | head -n 1)

gnuplotconfig=/mnt/ramdisk/tmp/testgnuplotconfig.$$
echo "set terminal png size 2800, 1440" > $gnuplotconfig
echo "set terminal png font \"/storage/weialgo5/arial.ttf\" 20" >> $gnuplotconfig
echo "set output '/var/www/weialgo/$2_$1.png'" >> $gnuplotconfig
echo "set key bmargin center horizontal Right noreverse enhanced autotitles box linetype -1 linewidth 1.000" >> $gnuplotconfig
echo "set title '$3 $2 $1 $DEVICENAMEZ'"  >> $gnuplotconfig
echo "set xrange [ 0 : $LINEZ ] noreverse nowriteback"  >> $gnuplotconfig
echo "set yrange [ 0 : 0.7 ] noreverse nowriteback"  >> $gnuplotconfig
echo "set style line 1 lt rgb 'red' lw 1"  >> $gnuplotconfig
echo "set style line 2 lt rgb 'yellow' lw 1" >> $gnuplotconfig
echo "set style line 3 lt rgb 'dark-green' lw 1"  >> $gnuplotconfig
echo "plot '$redcsv' with impulses ls 1 title 'Network Lost', '$yellowcsv' with impulses ls 2 title 'Latency Warning', '$greencsv' with impulses ls 3 title 'Relative Latency'" >> $gnuplotconfig
cat $gnuplotconfig
nice gnuplot $gnuplotconfig

echo gnuplot run

root@server0:/mnt/ramdisk/1/weialgo5#

wngraph.sh is where the "magic" happens....  The process is simple: extract the data, separate it into three different plot files, and use gnuplot to output a PNG graph of the data.

That's it.  Ping thousands of devices and take those thousands of pings to thousands of devices and make graphs of the data.
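One assumption worth spelling out: the extraction step in wngraph.sh implies the poller logs one CSV line per ping, of the form ip,rtt-in-seconds.  Here's a hypothetical sample (the addresses and values are made up) run through the same grep/awk pipeline:

```shell
# Made-up sample of the log format implied by wngraph.sh's
# grep "$2," | awk -F',' '{print $2}' extraction step.
LOGZ=$(mktemp)
cat > "$LOGZ" <<'EOF'
10.0.0.1,0.301
10.0.0.2,0.012
10.0.0.1,0.455
EOF

# Pull out every rtt sample for one device, just like wngraph.sh does.
grep "10.0.0.1," "$LOGZ" | awk -F',' '{print $2}'
rm -f "$LOGZ"
```

That prints 0.301 and 0.455, the two samples for 10.0.0.1, which become one column of the plot data.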

If anyone wants a copy of the scripts that I'm using, send me an email, I'll email you a tar.bzip2 file of what I have.  robluce1 @ yahoo . com

20121223 Why Weialgo? Essay #3



I'm still not sure how to talk about weialgo and the knowledge I'm trying to convey.

The mechanics of packet communications over a global network running interactive applications over TCP are not simple.  Honestly, the subject is deceptively complex.

My other hobby is airplanes and aircraft homebuilding.  I don't do as much anymore since I lost my medical, but I still help others from time to time in the building of their airplanes, and I've been building a two seat single engine with someone else since 2005.

Building an experimental aircraft is far easier than trying to get a global network to the point of being able to support interactive apps from a single point.  Also, in an experimental aircraft, the person you affect when you buckle yourself in and start the engine up is you (if you do have any passengers, I'm pretty sure the big "EXPERIMENTAL AIRCRAFT" notice on the plane lets them know what they're getting into).  With IT systems (particularly packet networks), you affect everyone who uses or connects through the system you build and support.

Building an airplane: easy (it's a lot of work, but it's easy-to-do, easy-to-understand work; volume doesn't make it hard).  Understanding everything about packet networks and how to make interactive applications work over them at distance: hard.  That's about as good as I can say it without writing books.

There's a snippet I wrote for a draft post that I think tries to take this on from a different angle:
-------------------------------------------------------------------------------------------
Most of this conversation hinges on the work of John Nagle.  Most network engineers know Mr. Nagle through his namesake Nagle Algorithm (which makes me think, with the change of focus of weialgo, maybe it'd be more fitting to rename it Nagalgo...  Problem is, he already has an algorithm named after him).  But what he really should be famous for is inventing the foundational queuing discipline, fair queuing (see Wikipedia and RFC 970).  On top of this, he was instrumental in designing and documenting the initial phases of TCP congestion control (RFC 896), which was fundamental to making the early ('80s) Internet work.

Honestly, if kids need to learn about Charles Babbage in high school computer classes, I feel the concepts of John Nagle should be taught as part of any serious college or higher level curriculum which covers the Internet and what makes it work.  I don't know Mr. Nagle, but he is the first person to have documented the essential processes and put into practice the concepts that make the Internet *work*.  I understand the concept of the Internet pre-existed his influence on the IETF process, but he is the one who demonstrated how to make it work at large scale.

Ok, on top of Nagle's work, you have to add Van Jacobson's work on TCP congestion control, and all the work that Mr. Jacobson and others did to make improvements to TCP that have made it much more functional (actually usable?) on a global network.  RFC 1122 and RFC 1323 are key pieces of work that must be understood if you want to make IPv4/IPv6 applications work their best over a global network.  Ignorance of these standards is inexcusable for anyone who takes responsibility for applications that run across long WANs or global networks.  It's inexcusable.

On top of this, you have to add a firm understanding of TCP's inner workings via RFCs 2988, 2581, and 2582.  An understanding of TCP SACK options (RFC 2018), TCP Window Scaling (RFC 1323), and the impact of TCP Timestamps (RFC 1323 and RFC 3522).
-------------------------------------------------------------------------------------------

Maybe someday, I'll try to write up something that does a better job of trying to describe why pings (icmp/udp/tcp) are so important to determine whether a network is ready to run interactive applications.  But, I think I'll leave it with this.

Take a PC, hook it up to a bad Internet link (line of sight wireless to a grain silo for example), load up wireshark, and get a subscription to a TCP based interactive game (like WoW or similar).  Then go raid with 24 of your closest friends.  You can also pick a server on the other side of the world if you absolutely refuse to get a bad Internet connection, it's close to the same effect.  After a few years, you will understand.

Pings matter, simply because they show how good, or how sloppy the network is.  Sloppy networks don't run interactive applications well.  Everything is moving to a dependence on the network to support interactive applications from farther and farther distances.

Weialgo graphs a network and shows if it's sloppy or not.  What it takes to make a non-sloppy network is hard.  Sloppy networks are easy.

So, if you're having problems with your network, ping, and graph it over time.  You're probably dealing with a sloppy network.

Wednesday, December 19, 2012

20121219 Why Weialgo? Essay #2


Another email about my newly bad home Internet connection.  Having network problems is fun (for a short while; I always have my 4G phone if I -really- need something, so this isn't that big of a deal at the moment).



subject:  the ping graphs for this morning

Rob Luce <moo@moo.com>
5:05 AM (2 minutes ago)

to Todd 
Not able to sleep as usual, so here's some graphs.  Yes, they're huge, but, just roll with it for a minute.

It's the same information: weialgo at 2 second intervals, pingplotter at 100ms intervals.  What I'm trying to show is that the views are very similar; it's just a matter of presentation.

Weialgo is going to display information based on attempting to visually give a representation of the impact of variation.  Pingplotter is displaying simple absolute latency.

Pingplotter will show a 300ms rtt latency as 300ms.  Weialgo will show 300ms rtt latency, if that's the BEST latency for the site, as 0ms.  That's important because, at a certain level, we can't do anything about the size of the earth.  If the network between Chicago and Singapore has zero congestion, and the best we'll ever get going from here to there and back is 300ms, then that's what it is.  A network should start with realistic expectations, and one of those is: photons and electrons have speed limits.  So, minimum latency is where weialgo starts, and it shows it as 0ms.

50ms jitter will show up at 350ms on pingplotter, it'll show up as a green 50ms spike on weialgo.

100ms jitter will show up at 400ms on pingplotter, it'll show up as a yellow 400ms spike on weialgo.  Above 50ms jitter on weialgo, it switches from relative to absolute, to demonstrate the overall effect.  Sloppy packet handling (jitter) will have a bigger impact on a site that's far away than a site that's close by.  Weialgo visualizes that, it was intended to.

200ms jitter will show up at 500ms on pingplotter, it'll show up as a red 500ms spike on weialgo.  Anything above 450ms absolute rtt on weialgo goes red, as when you reach the half second mark, human beings start perceiving the latency in applications.

And a dropped packet is a red strike in both of them.

Looking at the graphs this morning, I wanted to remind/reinforce what I've already explained before.  It's useful, and I think you'll need it.

Weialgo is a modeled graph; pingplotter is a simple absolute graph.  Pingplotter is probably more useful to engineers, but weialgo is more useful for representing the network to the lay person.  Pingplotter has to be translated for people; weialgo attempts to explain why people may be complaining based on what their expectations should be.





For those of you that know Todd, you can now feel very sorry for him, as this is the type of stuff he's had to deal with for almost 20 years now.  24x7 non-stop engineering-ish usually-IT stuff.  IT is my job, my hobby, and my life.  I imagine it can be a little rough to be around me for people who don't take their contribution to the world seriously.  But, I suppose normal human beings like to do things that are easy and soft and safe.  Golf, or bowling, or maybe watching football, or stuff like that I suppose.   Todd is exceptionally gifted, but I guess he has a streak of normal in him.

I've learned to live with the fact that normal people aren't driven to try to do as much as they can.  Yes, I know that will sound funny if you know me.  But, think about it for a bit.  I don't do easy or soft or safe.  There was a time where I couldn't be stopped, so I think God stopped me for me, although I've never thanked him for it.

Anyways...

Here's why weialgo.  Weialgo tries to show the network to people, or why the network seems to change moment by moment throughout the day.  If the packet handling is sloppy, jitter is going to be all over the place.  But base latency isn't something we can control, the distance between point A and B will always be fixed, and there will be incurred latency due to that overall round trip.  So, an absolute graph of latency isn't really appropriate for showing the quality of the network between A and B.  Starting the graph at the minimum observed latency is how it should be shown.  I can't do anything about the fact that the end user is in South Africa, and the server is in Canada.  What I can do is make sure that the expectation for the network response time of that server doesn't change.  That's the key to all of this.

Now, there has to be a realization that if we are sloppy in handling the packets, overall round trip time (rtt) will bite us.  Think of jitter as a dog that bites, and the base rtt as the dog's teeth.  Some networks are going to have a long base rtt, so you need to pay much better attention to how you handle the packets.  If a dog is toothless, ehhh...  who cares if he gums you a little bit.  But if that dog has REALLY big, razor sharp teeth (server in New York, user in India), then you REALLY need to pay attention or you'll end up without a leg.  Maybe a bit too visual of an analogy, but do you get it now?

In networking, it's popular to think that you only need bandwidth graphs to understand how to provision your network.  That's just plain false.  Knowing bandwidth utilization is important (95th percentile; 70%+ utilization over time with the expectation of that continuing into the future, upgrade the site), but by itself it won't tell you what people are observing from the network at the site.  You need bandwidth graphs and ping (rtt) graphs if you want to know how useful the network is to the people using it.

Normal ping graphs are absolute and need to be translated to non-network engineers.  Weialgo is a relative scale model and is designed to not need translation.  Anyone should be able to look at a weialgo graph and say "this looks good", or "this looks bad".  Because, it's the variation that matters at a point, not the absolute round trip time.

Well, rtt *does* matter, but I live in the real world.  Base rtt is a fact of life.   No one has invented a faster photon/electron, and moving everyone on the planet onto a small island so that base rtt never exceeds 10ms is a terrible idea.  So, distance is the rule, not the exception, and must be factored into everything IT, app design, networking management, and setting end user expectations.

But, because base rtt combined with jitter has a multiplicative effect on the performance of an interactive application, its impact needs to be represented in the graph.  Absolute rtt graphs do not do that, and non-network engineers don't understand this, nor do they get the idea when looking at an absolute rtt graph.

This point deserves some emphasis.  Take 3 identical server/client systems: one with 0ms base latency and 300ms jitter, one with 150ms base latency and 150ms jitter, and one with 300ms latency and 0ms jitter.  The 150/150 system will have the worst interactive performance.  I should detail this in a separate blog post some other time, but I'll stand on this statement for now.  I have a lot of experience with this particular issue of interactive applications over Internet connectivity; jitter by itself is bad, latency by itself is bad, but combine the two and people are out to shoot you because their Citrix sessions won't stay connected or because you wiped the RAID.

Personally, I like Packeteers in dq mode for doing this type of testing, but Linux has a similar latency simulation capability (since I always have a couple Packeteers around, I have only used the Linux method once; you'll have to google it).  Prove this to yourself.  Take your favorite interactive app and start messing with different combinations of base latency + jitter = some set amount.  The reasoning for all of this has to do with how TCP works, but you don't have to go that far into it.  Seeing it will be enough to get the idea.
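For anyone without a Packeteer handy, the Linux capability I'm referring to is netem.  A hedged sketch of the three test setups from the example above (eth0 is an assumed interface name; this needs root, and it will degrade that interface until you delete the qdisc):

```shell
# Three setups that sum to ~300ms, for comparing base latency vs jitter.
tc qdisc add dev eth0 root netem delay 300ms            # 300ms base, 0 jitter
tc qdisc change dev eth0 root netem delay 150ms 150ms   # 150ms base, +/-150ms jitter
tc qdisc change dev eth0 root netem delay 0ms 300ms     # ~0 base, up to 300ms jitter

# Clean up when you're done testing.
tc qdisc del dev eth0 root
```

Run your interactive app against each setup in turn and watch which one hurts the most.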

The weialgo graphs attempt to take this fact and put it into a visual form that any lay person should be able to see and understand.  That's why weialgo and the format of the weialgo graphs.

Hopefully, my next article will be on the weialgo graphing scripts and reporting process.

Rob

Monday, December 17, 2012

20121217 Why Weialgo? Essay #1


I have two articles that I've already typed up but I haven't put them on the blog yet, mainly because they're so rambling that even I have a hard time understanding them.  One is on the graphing and reporting of weialgo data, the other on "Why Weialgo", which is more of me complaining about why this concept is so hard for some people to grasp /facepalm type of thing.

But, something happened over the weekend (thank you Lord, as the outage has been perfect to study and caused me to write the email in the first place) which caused me to write an email which did a better job of being an intro to "Why Weialgo" than my war and peace blog entry which goes over the same subject.

My Internet connection, for all intents and purposes, went down.  It's a wireless connection to a line of sight tower several miles away.  The "old" wireless connection was notoriously unreliable, but its failure mode was acceptable in that I was able to put a TCP tunnel between the house and the ISP (using a Linux VM in their DC) and compensate for the loss that way.  When they installed the "new" wireless connection a year or so ago, network paradise was created.  It was so good that I had a hard time coming up with reasons why a copper/fiber connection would be better.

That was until last Friday.

Now, when you read this, yes, there is a weialgo graph.  But there's a more important undercurrent of philosophy/process/knowledge going through this.  It's about TCP, and how it operates over a network.  And since TCP is how 95%+ of everything in data communications is done nowadays, a deep knowledge of TCP is key to making all, ALL, *-!ALL!-*, IT systems work.  If you don't understand how your systems communicate, you'll never understand why they don't work (or what they'll need to be made to work).

Admittedly, some of my explanations of TCP behavior are sometimes a bit simplistic, but the essence is true. Wireless is hard on TCP, that's just the fact of it.  Weialgo can provide a view of this relationship between wireless and TCP.


The email, lightly edited:


subject: failure mode of the *isp*.net internet link


Rob Luce <blah@blah.com>

6:09 AM (13 minutes ago)

to Todd, Ben 

I know you think I'm nuts, but seeing this graph makes me profoundly...   sad.




This is what the wireless link has been doing about 15 minutes out of every hour or so since last Friday.  This is worse than the old link.  It's also exceptionally educational at the same time.


The old wireless line would go down often, but would normally stay down for a matter of seconds (for the sake of discussion, say 5 to 10 seconds on average), then come back up for a matter of seconds (5 to 10 seconds on average) and "flap" like that while it was having issues.  When it came back up, it would stay up long enough that the TCP tunnel between us and the ISP could clear its entire queue of packets (which I set to 512 kByte), then requeue while the link was down.  BUT!!!  The old line would only hit TCP once before coming back up for a statistically significant amount of time (for an android, 5 seconds is nearly an eternity).  This would cause only one halving of the TCP window, as there was only "one" outage to compensate for from TCP's point of view, after which TCP would "quickly" ramp back up to the full window so that the queue could clear out.


This new wireless failure mode is the death of TCP congestion control, and it details everything that's wrong with the IETF's insistence that dropped packets are to be treated as congestion, instead of what they more likely are nowadays - indications of a wireless network inline with the flow of communications.


Since both sides of the TCP tunnel, in this case, are Linux boxes, there are some parameters available that otherwise wouldn't be.  I've tried to adjust as much as possible to compensate for the frequent drops in the network (BIC mainly; most of the other options aren't of much use in this case).  But there's no getting around the "birdshot" pattern of packet loss that this link is having.


Every time TCP sees a dropped packet, it goes into "congestion recovery", but here, there is no congestion.  Normally, this means halving the cwnd, and eventually falling back to slow start.  Every one of those red strikes means the cwnd getting halved, and eventually falling back to slow start with the window reset to its initial size (mss * 3) (http://tools.ietf.org/html/rfc3782 http://tools.ietf.org/html/rfc2581).
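The effect of all that repeated halving can be sketched with a toy AIMD model.  To be clear, this is a deliberately simplistic sketch, not real NewReno and not the tunnel's actual stack; the constants are made up for illustration:

```python
import random

def simulate_cwnd(loss_prob, seconds=60, mss=1460, max_cwnd=64 * 1460):
    """Toy AIMD model: additive increase of one MSS per interval,
    multiplicative decrease (halving) on any drop.  Not a faithful
    NewReno implementation, just the shape of the behavior."""
    random.seed(42)
    cwnd = 3 * mss                    # initial window, per the "mss * 3" above
    total_bytes = 0
    for _ in range(seconds):
        total_bytes += cwnd
        if random.random() < loss_prob:
            cwnd = max(cwnd // 2, mss)        # halve on loss, floor at 1 MSS
        else:
            cwnd = min(cwnd + mss, max_cwnd)  # additive increase
    return total_bytes

clean = simulate_cwnd(loss_prob=0.0)
flutter = simulate_cwnd(loss_prob=0.3)   # "birdshot": frequent random drops
print(clean, flutter)
```

One "flap" costs you one halving and then the window climbs back.  Sprinkle random drops through the whole minute and the window never gets the chance to climb, which is the whole problem with the flutter pattern.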


"Down" and then "Up" (or "flap"), I can try to compensate for using a TCP tunnel with custom tweaking.   With "Shot full of holes like birdshot" (or "flutter"), there isn't really anything that can be done using TCP.  I'd have to write my own tunneling software using UDP as the carrier, with a set transmit window, and add something like a full SACK map to cover all in-flight packets.  Of course, there would be some benefits to this, but that level of coding isn't part of my normal skill set.





Every one of those red strikes on the weialgo graph that I've inspected has the exact same pattern.  It's the "TCP Killer Birdshot" or "flutter" pattern.   Which means, not much can be done about it using off the shelf technology.


Hopefully, this is something that *isp*.net finds this week of their own accord.  But, if you would, please call and let them know that the link has been performing strangely since last Friday.  


The ping plotter is using 28 byte packets at 100 ms.  The weialgo graphs are of the first hop router http://192.168.0.20/1.1.1.1.html after I determined that the issue wasn't being caused by the vm.  The vm graphs go back farther in time.  http://192.168.0.20/1.1.1.146.html


Moo

Rob



Sitting here typing/cutting-and-pasting this post, I'm having to watch the pingplotter between me and the ISP so that I know when to click "Save".  (By the way, pingplotter should be mandatory for every network person, and I wish they had an app for Linux.)



Wow, that is ugly.  Strangely enough, I tend to try to find ways to make things work.  If all you have is binder twine, grey tape, and determination, could you make a phone?  I have a tendency to say "you should really just go buy a couple phones and some phone wire", then go right to work making two cups on a string.

But, looking at these graphs, I can't imagine how I'd make something work over that.  Change the tunnel to UDP, make a SACK map out of everything to make the tunnel reliable, and retransmit the individual TCP-in-UDP packets like a machine gun until the ISP gives up and fixes the link.  There would be a lot of benefit to this in wireless environments, but I'm a hack scripter, not a real coder.
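For the curious, the "full SACK map over a fixed transmit window" idea would look roughly like this on the receiver side.  A hypothetical toy sketch, not any real tunnel software:

```python
class SackMap:
    """Hypothetical receiver-side SACK map for a UDP tunnel with a
    fixed transmit window: track every in-flight sequence number
    individually, so one hole never stalls the rest of the window."""
    def __init__(self):
        self.base = 0                  # lowest sequence not yet received
        self.received = set()          # out-of-order seqs above base

    def ack(self, seq):
        if seq < self.base:
            return                     # duplicate, already delivered
        self.received.add(seq)
        while self.base in self.received:   # slide the window forward
            self.received.discard(self.base)
            self.base += 1

    def missing(self):
        """Seqs the sender should retransmit, machine-gun style."""
        if not self.received:
            return []
        return [s for s in range(self.base, max(self.received))
                if s not in self.received]

m = SackMap()
for seq in (0, 1, 3, 5, 6):           # 2 and 4 lost to "birdshot"
    m.ack(seq)
print(m.base, m.missing())            # -> 2 [2, 4]
```

The point of the map is that a drop only ever costs you the dropped packet itself, not a window halving across the whole flow.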

What do you think?

Thursday, December 6, 2012

20121207 Weialgo/NMap, random notes

Just a couple notes on nmap and how it could be modified for the purpose that I'm using it for.

First, nmap could be changed to do polling with protocols other than ICMP.  There's no reason a TCP SYN/ACK poll couldn't be done, or some form of a UDP poll.  Having said that, it wouldn't normally be good form to use TCP or UDP for the purposes of determining pure network availability and rtt measurement.  Particularly if you're polling once a second.

Hitting an SSH port with a TCP SYN/ACK/RST poll, once a second, wouldn't create a DoS, I don't care what the newb geek in IT Security says.  It would work just fine, and nothing designed since 1993 would have an issue with it.  But, it would be considered poor form in normal use (mostly because the newb geek in InfoSec would go all spazzy on you for doing it, even though he wouldn't have a leg to stand on in a real discussion about it).

So, you wouldn't have to do ICMP, you -*could*- do a "ping" through a firewall using a TCP poll, which nmap is expert at.  But, it would be poor form, normally.
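To show the idea, here's the TCP "ping" done the lazy way: time a full TCP connect (the SYN/SYN-ACK exchange) instead of a raw SYN/RST, since raw sockets need root and more machinery.  This is a minimal sketch; port 22 as the target is just an assumption:

```python
import socket
import time

def tcp_ping(host, port=22, timeout=1.0):
    """Measure round-trip to a host by timing a TCP connect to an
    open port, then closing immediately.  Works through firewalls
    that drop ICMP.  Returns RTT in milliseconds, or None if the
    host is down, filtered, or the port is closed."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    start = time.monotonic()
    try:
        s.connect((host, port))
        rtt_ms = (time.monotonic() - start) * 1000.0
    except OSError:
        rtt_ms = None
    finally:
        s.close()
    return rtt_ms
```

Note that a connect-and-close is slightly heavier than nmap's SYN/ACK/RST probe (the far side briefly allocates a connection), which is exactly why nmap's raw-socket version is the better-mannered tool for this.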

The next thing is, I am not paid to do pings.  Yep, this is not my job, this is just one of my hobbies that I do at home (most of the time).  Yes, I consider this fun.  Actually, this type of stuff is a blast.

But, no, I do not get paid to do this.  But I let my hobbies benefit my work if at all possible.  :-)

The box that I'm using for this at my house is an Intel Atom D2700, which works absolutely fantastically for this.  I'm not pinging as much stuff as I am at work, but I dare say the Atom is faster than the throw-away box that I'm using at work (actually, that was meant to be a joke, but now that I think of it...).

So, 2 or 3 hundred bucks will get you a fantastic weialgo box.  The Atom boards are yet another thing that have an undeserved bad reputation.  Very respectable speed at 10 or 13 watts (I pay my home electric bill, so details like this are important to me).

I should do a rant about performance to watt comparisons at some point, but, that'll be a few blog posts from now.

20121206 nmap.... /facepalm


So, I've been pinging hundreds of devices concurrently for years with a series of hand-me-down servers at work and home-made C or Perl programs.  And, for years, I've sat and tried to figure out a way around the high process count issues with my C or Perl attempts at doing mass, continuous ping polling.

Earlier this year, I had someone ask one of those network questions that they thought should be so simple to answer.  It made me sit down and take another hard look at re-writing weialgo again in C.

To answer the question, I was going to need to ping thousands of devices, concurrently, every second if I could.  I quickly determined that there wasn't a way to do it in Perl.  I pretty much came to the conclusion that C was the only answer.  I thought I was going to need to start by creating a memory mapped stream of the raw icmp packets that I would need to copy out to the network driver directly.

My first attempts at this didn't go so well.  So, I googled, changed tactics, got some success, but still had a host of issues.

Frustration led to me reviewing the normal "ping" alternatives, hping, sing and the like.  Nothing gave me the impression that it would be even close to doing the job, and most of those utilities didn't record times with any level of accuracy.

So, I'm sitting in the lab, chin in my hands looking at the server screen wondering how I'd ever do this.  Don't ask me what made me think of it, I don't remember, but I got one of those "I wonder if nmap will do this?" random thoughts...   When I thought of it, I didn't seriously think nmap would do the job, but I pulled up the --help and started reading.

Long story short, everything I had been attempting to do with weialgo, if I didn't want to do red/green alerting, was BLINDINGLY simple with nmap.  Jean-Luc Picard /facepalm simple.  Several days spent in denial, simple.  OMGWTFwasIthinking simple.  Sitting here typing this, I still want to shake my head  at how stupidly easy it is with nmap.

Soooooo....  Here it is.


nice --adjustment=-10 nmap -n -sP -PN -PE -v --max-retries 0 --max-rtt-timeout 1000 --min-rtt-timeout 900 --initial-rtt-timeout 900 --max-rate 20000 -oX /mnt/ramdisk/$2.xml -e eth0:$3 -iL /weialgo5/weialgo5.lst >> /dev/null


$2 is the yearmonthdayhourminutesecond (eg: 20121207015611), and $3 is obviously specifying the network subinterface to get around polling overlap issues, but everything else is pretty straight forward.

Let me break it out.

(nice --adjustment=-10)  The nice adjustment is obviously an attempt to get around any incurred system latency if at all possible.  I'm trying to get accurate sub-millisecond measurements out of the Linux box I'm running this on, so nmap needs to have a higher priority than the basic system applications.

(nmap -n -sP -PN -PE -v --max-retries 0 --max-rtt-timeout 1000 --min-rtt-timeout 900 --initial-rtt-timeout 900)    Ping.  Yep, that's all it is really.  Ping once, and wait up to 1 second (1000 milliseconds), don't ping multiple times, just once.

(--max-rate 20000)  No more than 20,000 ping packets in a single second.  So, 224 bits per packet * 20,000 packets in a second = maximum of 4,480,000 bits per second on the network.  Now, this is a maximum, and depending on how the network is laid out, the likelihood that these pings would go over the same network links is pretty close to zero.  --min-rate would be another good option, but it's not something that I need in the network I'm in.

(-oX /mnt/ramdisk/$2.xml)  Save off the results to an xml file for parsing.

(-e eth0:$3)  Round robin through a series of network subinterfaces to keep concurrent polling from giving false returns between the different instances of nmap.

(-iL /weialgo5/weialgo5.lst)  List of devices to ping.
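About that --max-rate ceiling, the arithmetic spelled out (link-layer framing isn't counted here, so the real on-the-wire number is a bit higher):

```python
# A minimal ICMP echo is 20 bytes of IP header + 8 bytes of ICMP
# header = 28 bytes = 224 bits (matching the 28-byte pings mentioned
# in the earlier post; Ethernet framing would add more on the wire).
packet_bits = (20 + 8) * 8
max_rate = 20_000                      # nmap --max-rate, packets per second
print(packet_bits * max_rate)          # -> 4480000 bits/s worst case
```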

Once you have this, the rest is academic.  Really.  Freakin' nmap.  With this, you could ping tens of thousands, maybe hundreds of thousands of devices, every second if you wanted to.  Someone will need to try to ping 100,000 unique devices and tell me if it works.

The reason I've talked this long about this is, you would not believe how many times I went to bed thinking "There's gotta be a better way to ping devices and record accurate round trip times...".  Freakin' nmap.  /facepalm

Anyways, it's all downhill from here.  I'm just using a set of simple scripts to fire off the nmaps, parse the data, and save it off for reporting later.

The box has 4 network subinterfaces, eth0:1, :2, :3, and :4.  This allows 4 seconds between consecutive nmaps on the same subinterface.

I have a script calling a script which runs the nmap.  Basic stuff.


#!/bin/bash
#/weialgo5/weialgo5.sh

while :

  do

TARBZ=$(date --utc +%Y%m%d%H)
TARBZSEC=$(date --utc +%Y%m%d%H%M%S)
/bin/bash /weialgo5/weialgo5child.sh $TARBZ $TARBZSEC 1 &
sleep 1

TARBZ=$(date --utc +%Y%m%d%H)
TARBZSEC=$(date --utc +%Y%m%d%H%M%S)
/bin/bash /weialgo5/weialgo5child.sh $TARBZ $TARBZSEC 2 &
sleep 1

TARBZ=$(date --utc +%Y%m%d%H)
TARBZSEC=$(date --utc +%Y%m%d%H%M%S)
/bin/bash /weialgo5/weialgo5child.sh $TARBZ $TARBZSEC 3 &
sleep 1

TARBZ=$(date --utc +%Y%m%d%H)
TARBZSEC=$(date --utc +%Y%m%d%H%M%S)
/bin/bash /weialgo5/weialgo5child.sh $TARBZ $TARBZSEC 4 &
sleep 1

  done

And, the child script is pretty easy to predict.


#!/bin/bash
#/weialgo5/weialgo5child.sh

echo 'timestamp_utc,'$2 >> /mnt/ramdisk/$2.txt
nice --adjustment=-10 nmap -n -sP -PN -PE -v --max-retries 0 --max-rtt-timeout 1000 --min-rtt-timeout 900 --initial-rtt-timeout 900 --max-rate 20000 -oX /mnt/ramdisk/$2.xml -e eth0:$3 -iL /weialgo5/weialgo5.lst >> /dev/null
/usr/bin/perl /weialgo5/xmlweialgo5.pl /mnt/ramdisk/$2.xml >> /mnt/ramdisk/$2.txt
cat /mnt/ramdisk/$2.txt >> /mnt/ramdisk/weialgo5log$1.txt

rm -f /mnt/ramdisk/$2.xml
rm -f /mnt/ramdisk/$2.txt


Simple stuff.  Just kick off an nmap, roughly once a second, ping a thousand devices, and save off the results to an xml file.

Why the xml file (and the perl program to parse it)?  nmap saves the rtt in microseconds in the xml file, while the other output formats seem to use fractions of a second or milliseconds instead.  Not a big deal, but, if you can get microsecond resolution, why not?  Also, xml is fairly easy to parse through.

The next program is the perl script.  I don't need anything other than the IP address and the rtt, so I strip out everything else, and save the result to a text file.


#!/usr/bin/perl
#/weialgo5/xmlweialgo5.pl

use strict;
use warnings;

my $file = $ARGV[0];
my $line;
open my $fh, $file or die "Could not open $file: $!";
my $ipaddress = "error";
my $rttime = 2;
my $x001;
my $x002;
my $x003;
my @foo;

while( $line = <$fh>)  {

        $x001 = $line;
        chomp $x001;
        $x002 = substr ($x001,0,6);
        if ( $x002 eq "<host>" ) { $ipaddress = "error"; $rttime = 2; }
        $x002 = substr ($x001,0,12);
        if ( $x002 eq "<times srtt=" ) { @foo = split ("\"", $x001); $rttime = $foo[1] / 1000000; @foo=(); }
        $x002 = substr ($x001,0,14);
        if ( $x002 eq "<address addr=" ) {
                @foo = split ("\"",$x001);
                $x003 = substr ($foo[3],0,3);
                if ( $x003 eq "ipv" ) {
                        $ipaddress = $foo[1];
                }
                @foo=();
        }
        $x002 = substr ($x001,0,7);
        if ( $x002 eq "</host>" ) { print "$ipaddress,$rttime\n"; }

}



close $fh;


This keeps the actual data that I'm storing off fairly small, and easily compressible.
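For what it's worth, the same extraction is only a few lines in Python with the stdlib xml.etree parser.  A minimal sketch, assuming nmap's usual `<address addr="..." addrtype="ipv4">` and `<times srtt="...">` elements, and using the same "error"/2 sentinels as the perl:

```python
import xml.etree.ElementTree as ET

def parse_nmap_xml(path):
    """Pull (ip, rtt_seconds) pairs out of an nmap -oX file.  nmap
    records srtt in microseconds in the <times> element; a host with
    no <times> (no reply) gets the sentinel rtt of 2."""
    results = []
    for host in ET.parse(path).getroot().iter('host'):
        ip = 'error'
        for addr in host.findall('address'):
            if addr.get('addrtype', '').startswith('ipv'):
                ip = addr.get('addr')
        times = host.find('times')
        rtt = int(times.get('srtt')) / 1_000_000 if times is not None else 2
        results.append((ip, rtt))
    return results
```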

Freakin' nmap.....   /facepalm    Thank you fyodor.

My next post will be on rolling up the data, reporting and generating the graphs.

Friday, November 30, 2012

20121201 I never claimed my hacks were pretty.... Weialgo version 2 and 3.

I looked through my "stuff" and haven't found a version of weialgo older than 2008 here at home.  I probably have older versions squirreled away at work, but this seems to be the oldest version I have at home.


#!/usr/bin/perl

#Weialgo version 2

use Net::Ping;
use Time::HiRes qw (usleep gettimeofday);
use strict;
#use warnings;

my $host = $ARGV[0];
my $hostname = $ARGV[1];
if ( $host eq "" ) { print "\nno IP to ping $ARGV[0] $ARGV[1] $ARGV[2]\n\n"; exit;}
open(LOG, '>>/mnt/ramdisk/v2logfile.csv');
select(LOG); $| = 1;
close(LOG);
select(STDOUT); $| = 1;
my ($seconds, $microseconds) = gettimeofday();
my $prevseconds = $seconds;
my $starttime = $seconds;
srand($microseconds);
my $offsetms = int(rand(1000000));
usleep(1000000-$microseconds+$offsetms);
my $down = 0;
my $totaldown = 0;
my $transitions = 0;
my $totaltime = 1;
my $i = 0;
my $j = 0;
my $ret = 0;
my $duration = 0;
my $ip = 0;
my $runtime = 0;
my $sentpackets = 0;
my $meetsla = 0;
my $minsla = 100000;
my $sec = 0;
my $min = 0;
my $hour = 0;
my $mday = 0;
my $mon = 0;
my $year = 0;
my $wday = 0;
my $yday = 0;
my $isdst = 0;

my $p = Net::Ping->new("icmp");
$p->hires();

while ( $i==0 ) {

  ($seconds, $microseconds) = gettimeofday();
  ($ret, $duration, $ip) = $p->ping($host, 0.6);
  $runtime = $seconds - $starttime;
  $sentpackets++;
  if ( $ret == 0 ) {
    if ( $down == 0 ) {
      open(LOG, '>>/mnt/ramdisk/v2logfile.csv');
      printf LOG ("$seconds,$host,$hostname,$runtime,$totaltime,$transitions,$totaldown,1,%.2f\n", 1000 * 10);
      close(LOG);
    }
    $duration = 10000;
  }
  if ( $ret == 1 ) {
    if ( $down > 1 ) {
      $j = $seconds - $down + 1;
      $totaldown = $totaldown + $j;
      $seconds--;
      open(LOG, '>>/mnt/ramdisk/v2logfile.csv');
      printf LOG ("$seconds,$host,$hostname,$runtime,$totaltime,$transitions,$totaldown,$j,%.2f\n", 1000 * 10);
      close(LOG);
      $seconds++;
    }
    open(LOG, '>>/mnt/ramdisk/v2logfile.csv');
    printf LOG ("$seconds,$host,$hostname,$runtime,$totaltime,$transitions,$totaldown,0,%.2f\n", 1000 * $duration);
    close(LOG);
    $duration = $duration * 1000;
  }
  if ( $duration < $minsla ) { $minsla = $duration; }
  if ( $duration < 10000 ) { if ( $duration < ( $minsla + $minsla + 50 ) ) { $meetsla++; } }
  ($seconds, $microseconds) = gettimeofday();
  if ( $microseconds < $offsetms ) {
    $j = $microseconds + 1000000;
    $microseconds = $j;
  }
  $j = 1000000+$offsetms-$microseconds;
  if ( $ret == 1 ) {
    $j = $j + 5000000;
    $totaltime = $totaltime + 6;
    if ( $down > 0 ) { $transitions++; }
    $down = 0;
  }
  if ( $ret == 0 ) {
    $j = $j + 1000000;
    $totaltime = $totaltime + 2;
    if ( $down == 0 ) { $transitions++; $down = $seconds}
    if ( -e "/mnt/ramdisk/pingslow.txt" ) {
      $j = $j + 4000000;
      $totaltime = $totaltime + 4;
    }
    #if ( $seconds - $prevseconds > 3 ) {
    #  $j = $j + (( $seconds - $prevseconds - 2 ) * 1000000 );
    #}
  }
  $prevseconds = $seconds;
#  print "$j,$down\n";
  usleep($j);
  if ( -e "/mnt/ramdisk/pingflag.txt" ) {
    $i = 1;
    $j = 0;
    if ( $ret == 0 ) {
      $j = $seconds - $down + 1;
      $totaldown = $totaldown + $j;
      open(LOG, '>>/mnt/ramdisk/v2logfile.csv');
      printf LOG ("$seconds,$host,$hostname,$runtime,$totaltime,$transitions,$totaldown,$j,%.2f\n", 10000);
      close(LOG);
    }

    ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst)=localtime(time);
    open(LOG, '>>/storage/weialgo/rollups/v2sla.csv');
    printf LOG ("%4d-%02d-%02d %02d:%02d:%02d,$seconds,$host,$hostname,$minsla,$sentpackets,$meetsla,%.2f\n",$year+1900,$mon+1,$mday,$hour,$min,$sec, $meetsla / $sentpackets * 100);
    close(LOG);
    sleep(5);
  }

}

$p->close();


This seems to be the last version of version 2 weialgo that I put together.  Yes, it was written in Perl, mainly because I didn't see much improvement in doing it as a native binary in C.  Perl was much easier to update, and seemed to have similar performance.

I'm not going to bother including or describing the reporting modules.  Most of that was done in shell scripts or perl. You can use perl/sed/awk/grep/sort to go through the information that this script provides and get to some very useful information if so inclined.

Net::Ping and Time::HiRes are both CPAN Perl modules that do most of the magic.  Reading through the perl script, you can get an idea how those modules work, or you can go out to the Perl website and read up on the modules directly.

The thought process for this version was a carry-over of the original version, which was more for Red/Green alerting, with the side benefit of being able to report on data with a number of different statistical models.

At the time I was running upwards of 200 or so pings to individual systems/routers, which was about the limit of what this process was capable of.  It was a simple enough "/usr/bin/perl ./weialgo2.pl 10.1.1.1 myserver.com &" to get it started, and let it run continuously.

As I remember, I would let the program ping the device once every 6 seconds, and record the round trip time and the other data I thought relevant at the time.  If the ping dropped, or didn't return in time, I changed the ping time to once a second, until the device responded again.

This last part was the "Weippert Algorithm".  It basically goes like this.  Decide how fast you want to know that a device is down (say, 60 seconds).  Divide that number in half, subtract 1, round down (29).  Using a fast, low bandwidth, low cpu status protocol (like ICMP ping, or TCP SYN/ACK port open check), check the device every int(X/2-1) seconds (29).  The moment the device misses a poll, check the device every second until it comes back up.  If the device misses/drops int(X/2) polls, declare the device down and send an alert.

(If you know Scott Weippert, you can tell him how brilliant he is.  :-) )
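The int(X/2-1) arithmetic can be sketched in a few lines, including the worst-case detection time, which is what makes the rule work:

```python
def weippert_intervals(detect_within):
    """The int(X/2-1) rule: given how fast you want to know a device
    is down (X seconds), poll every int(X/2-1) seconds while it's up,
    then every second after the first missed poll, declaring it down
    after int(X/2) consecutive misses."""
    slow = int(detect_within / 2 - 1)    # steady-state poll interval
    misses = int(detect_within / 2)      # misses before alerting
    # Worst case: the device dies right after a good poll, so you wait
    # at most one slow interval plus the run of one-second polls.
    worst_case = slow + misses
    return slow, misses, worst_case

print(weippert_intervals(60))   # -> (29, 30, 59)
```

So for a 60-second target you idle along at one poll every 29 seconds, and still get the alert out inside the 60-second window.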

The Algorithm is simple, and very effective for Red/Green alerting.  Unfortunately, I couldn't convince anyone I worked with that it was better than SNMP for Red/Green (it's MUCH better, SNMP is a crappy system for basic availability alerting, from a network point of view, but I haven't won that fight yet).

As I worked through the different versions of Weialgo, I found that I was using Weialgo more for statistical reporting, and not for Red/Green.  From a statistical viewpoint, the int(X/2-1) aspect of weialgo complicates statistical reports quite a bit, as it means that all data has to be time indexed as part of the reporting process.  It's much easier to just ping every second or two, and report on the data using that assumption.

The main drawback to this version (and any other process-per-device based polling) is that it drives up the number of concurrent processes on the polling server.  On the P4 I was running this on, at around 200 instances of weialgo, the amount of incurred latency based on just process switching within Linux began to throw off the results.  CPU utilization would normally bounce off 100% continuously, and the box was useless for anything else.  So, I normally tried to keep the number of polled devices much less than 200, usually around 100, so that I could do some simple reporting on the same box against the data.

At one point in time, earlier versions of Weialgo would email out to a pager every time a device went down.  That lasted for a couple weeks (a router would lose its E1 for 10 seconds at 2am, PAGE THE NETWORK TEAM!!!!  I wasn't very popular for a couple weeks.).  Adding that functionality back into the perl script would be easy enough to do.

Obviously, I'm over-reporting information in the log files, but the total amount of data is minor in my opinion even with the extra data.  gzip/bzip2 the log files after reporting on them, and you can keep decades of data in a few gigabytes.

Also, since I put all of the data into the same log file, it's possible to have concurrency problems, depending on the version of *nix you want to run this on.  I never had a problem on Linux, but Solaris tended to throw a mangled line in the log file every once in a while.  If you have problems with concurrent processes writing to the same logfile creating mangled entries, split the logfiles up by renaming the logfile with the device name.

$offsetms was put in because of the number of concurrent processes I was running.  I didn't want all of the pings to go out the same exact microsecond.  So, using $offsetms, I randomized the start time to different microseconds for each process.  This spread out the pings, and the processing.

To try this script out, it should be fairly simple.  You'd need Linux (or your favorite version of *nix), Perl, and a CPAN install of Net::Ping and Time::HiRes.  Copy and paste the script into a .pl on the box, and run the script with the ip address and hostname you want to ping.  Sit back and watch it ping the device until the end of time, or until the server is rebooted.

Oh, as you can probably see from the script, I'm a fan of RAM disks for transient data like this.  Since the log file isn't held open by the processes, you can mv the log file at any time and the processes will keep running.  So, rather than doing hundreds of individual writes every second to a single file on a hard drive, I do the "spam" of individual writes to a file on a RAM disk, then mv/report the logfile data via a cron process every hour or so.  Saves wear and tear on the hard drive, and speeds up everything overall.  And, frankly, if I lose an hour or so of pings, no big deal.  RAM disks are another basic IT staple (like ping) that has been lost to an unearned bad reputation.

My next post will be on the "current" version of Weialgo, version 5.  Version 4 and 5 were both based on two issues with the previous versions.  #1  Process per device polling limited the scalability of weialgo to a few hundred devices per polling server.  #2  Red/Green alerting wasn't needed, I was only using the data to run reports and create graphs.

Version 4 was an attempt to create a single process that polled multiple devices (Version 4 never worked properly).  Version 5 was a complete re-write (eg: I threw out all of my previous work) when I discovered that there was a much easier way to ping thousands of devices.  It was a bit of a /facepalm moment.

Here's a copy of one of my Version 3 scripts.  Obviously, I had yanked out all of the int(X/2-1) logic, and I'm just logging straight pings to simplify statistical reporting.


#!/usr/bin/perl

# Weialgo version 3

use Net::Ping;
use Time::HiRes qw (usleep gettimeofday);
use Time::Local;
use strict;
#use warnings;

my $host = $ARGV[0];
my $hostname = $ARGV[1];
if ( $host eq "" ) { print "\nno IP to ping $ARGV[0] $ARGV[1] $ARGV[2]\n\n"; exit;}
my $sec = 0;
my $min = 0;
my $hour = 0;
my $mday = 0;
my $mon = 0;
my $year = 0;
my $wday = 0;
my $yday = 0;
my $isdst = 0;
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = gmtime(time);
$year += 1900;
$mon += 1;
$mon = sprintf("%02d", $mon);
$mday = sprintf("%02d", $mday);
open(LOG, ">>/tmp/v3logfile_$hostname\_$year$mon$mday.csv");
select(LOG); $| = 1;
close(LOG);
select(STDOUT); $| = 1;
my ($seconds, $microseconds) = gettimeofday();
my $starttime = $seconds;
srand($microseconds);
#my $offsetms = int(rand(1000000));
#my $offsetms = 100;
my $offsetms = 1;
usleep(1000000-$microseconds+$offsetms);
my $totaldown = 0;
#my $totaltime = 1;
my $i = 0;
my $j = 0;
my $ret = 0;
my $duration = 0;
my $previousduration = 2;
my $ip = 0;
my $runtime = 0;
my $sentpackets = 0;

my $p = Net::Ping->new("icmp");
$p->hires();

while ( $i==0 ) {

  ($seconds, $microseconds) = gettimeofday();
  $j = 1000000+$offsetms-$microseconds;
  usleep($j);
  ($seconds, $microseconds) = gettimeofday();
  ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = gmtime(time);
  $year += 1900;
  $mon += 1;
  $mon = sprintf("%02d", $mon);
  $mday = sprintf("%02d", $mday);
  ($ret, $duration, $ip) = $p->ping($host, 0.5);
  $runtime = $seconds - $starttime;
  $sentpackets++;
  if ( $ret == 0 ) {
    $totaldown++;
    $previousduration = $previousduration + 0.2;
    if ( $previousduration > 2 ) {
      $previousduration = 2;
    }
    open(LOG, ">>/tmp/v3logfile_$hostname\_$year$mon$mday.csv");
    printf LOG ("$seconds,$host,$hostname,$runtime,$microseconds,$sentpackets,$totaldown,1,%.2f\n", 1000 * $previousduration);
    close(LOG);
  }
  if ( $ret == 1 && $duration > 0 ) {
    open(LOG, ">>/tmp/v3logfile_$hostname\_$year$mon$mday.csv");
    printf LOG ("$seconds,$host,$hostname,$runtime,$microseconds,$sentpackets,$totaldown,0,%.2f\n", 1000 * $duration);
    close(LOG);
    $previousduration = $duration;
  }
  if ( -e "/tmp/pingflag.txt" ) {
    $i = 1;
  }
  # crude catch-up throttle: the later in the second the ping returned,
  # the more whole seconds we back off (these checks stack, they are not elsif)
  if ( $microseconds > 50000 ) {
    sleep (1);
  }
  if ( $microseconds > 100000 ) {
    sleep (1);
  }
  if ( $microseconds > 150000 ) {
    sleep (1);
  }
  if ( $microseconds > 200000 ) {
    sleep (1);
  }
  if ( $microseconds > 250000 ) {
    sleep (1);
  }
  if ( $microseconds > 300000 ) {
    sleep (1);
  }
  if ( $microseconds > 400000 ) {
    sleep (1);
  }
  if ( $microseconds > 500000 ) {
    sleep (1);
  }

}

$p->close();