Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - dextaslab

Pages: 1 2 3 [4] 5
46
Feature requests & roadmap / Re: Monitoring
« on: September 05, 2011, 10:57:39 am »
I think we need to see this as larger than only the server. I agree that if you just want a CPU graph, you can do it manually with hand-written scripts/files/...
But since LinuxMCE provides a lot more (DHCP, DNS, proxy, firewalling...), we can take this to a higher level. If you put in tools like Cacti, you can start monitoring your whole network/house. Think about all the computers, switches, routers, power meters... The key element here is user-friendliness. People need to be able to do it themselves, simply. With that, more people will use it and fewer people will ask you questions...
And it must be flexible...

LOL, I see now, you're looking for a magical do-everything GUI that records EVERYTHING you want with virtually no scripting/command prompt. Bwahahahahahah!!

The solution I provided (if you read the bash script) is generic; it'll work with ALL LinuxMCE systems (testing is needed), and if accepted by the LinuxMCE gods, you do virtually nothing: just install the package and click on links in the webadmin... It also does what you wanted; it integrates into the LinuxMCE backend, looks through your database to find how many MDs you have, and if they have HDDs it adds them to the graphing list.

"Larger than only the server" - I wrote the script in about a day, and it is.
"And it must be flexible" - It is; I have a graph that downloads weather reports and compares them with HDD temperatures.
"Think about all computers, switches, routers, power-meters" - it works with all SNMP-enabled devices, plus anything you can generate scripted statistics for. Again... written in about a day.

You've obviously just discovered Cacti and haven't had much experience with other related software... I have used most of it, and when you're talking about flexibility and low overheads, MRTG is better. I'd have thought that as a network engineer you'd have used MRTG, as most larger networks do.

And with that I say "You're welcome", "Keep dreaming", and by all means prove me wrong with your working demo.

47
Feature requests & roadmap / Re: Monitoring
« on: September 05, 2011, 12:45:04 am »
You can set thresholds with mrtg; example:

ThreshDir: /path/to/mrtg/thresh
ThreshMaxI[performance]: 1
ThreshMaxO[performance]: 1
ThreshProgI[performance]: /path/to/mrtg/scripts/alert.php
ThreshProgO[performance]: /path/to/mrtg/scripts/alert.php

For the alert scripts, I'd be looking at something like this, which sends orbiter notifications: http://www.modlog.net/?p=109
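For anyone trying the ThreshProg lines above: MRTG invokes the configured program with the target name, the threshold value, and the measured value as arguments. A minimal shell sketch of such a hook (the message format is my own; the linked post pushes an orbiter notification instead of just printing):

```shell
#!/bin/sh
# Minimal sketch of an MRTG ThreshProg hook.
# MRTG invokes the configured program as:
#   alert.sh <target-name> <threshold> <current-value>
mrtg_alert() {
    target="$1"; threshold="$2"; current="$3"
    printf 'MRTG threshold on %s: value %s crossed threshold %s\n' \
        "$target" "$current" "$threshold"
}

# When used as a standalone ThreshProg script, forward the arguments:
if [ "$#" -eq 3 ]; then mrtg_alert "$@"; fi
```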

48
Feature requests & roadmap / Re: Monitoring
« on: September 04, 2011, 02:39:02 pm »
After editing the bash script, the only things left to do are to manually install mrtg:
# apt-get install mrtg

and edit "/var/www/lmce-admin/operations/myDevices/editDeviceParams.php":

...
  </script>

        <fieldset>
+            <a href="./mrtg/'.$deviceID.'/index.html">Graphs If Available</a>   
        <legend>'.$TEXT_DEVICE_INFO_CONST.' #'.$deviceID.'</legend>
        <table border="0">
...

(I couldn't get the test for whether the file exists working, so the link shows on all devices, not just MDs.)
http://modlog.net/temp/monitoring.png

The only thing left is to integrate it into the core scripts used when creating MDs and clean it up, maybe add more graphs, etc. Anyone got any interest in it?

49
Feature requests & roadmap / Re: Monitoring
« on: September 04, 2011, 10:44:34 am »
I have spent some more time on it. Again, it's dirty bash coding, but it shows that it's very possible (and half done). It can also integrate into the webadmin to show, for example, disk-usage graphs when selecting a RAID drive, or a CPU-usage graph when an MD is selected, etc.

I'll put more time into it later next week and put it forward to see if anyone thinks it's any good or worth continuing.

50
Feature requests & roadmap / Re: Monitoring
« on: September 03, 2011, 06:42:18 pm »
I don't agree; Cacti is installed in about 2 minutes, without needing any scripting knowledge, and Cacti does a lot more than MRTG (services, thresholds, syslogging, weathermaps...). The user interface is so good that you won't want anything else. When I explain Cacti to my colleagues/customers in 15 minutes, they're off with it, without any Linux knowledge! And resources? If you start with your own scripts, in the end your resource usage will be much higher than with Cacti.

For me? If you want people to use 'your' system, you need to give them something simple.
Once they need to start changing config files, they are gone/lost. That's the only reason Windows wins over Linux for the normal computer user.  :-X


But of course, everybody makes his own choice. ;)


I think maybe we're not quite talking about the same thing... The scripts are written to be generic, 'one shoe fits all'; once deployed, users don't even have to know they are there.

This is a proof of concept to auto-create an MRTG config file: http://www.modlog.net/temp/MRTG_config.sh
Example output:

# /usr/pluto/bin/MRTG_config.sh
WorkDir: /var/www/mrtg
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/TCP-MIB.txt
EnableIPv6: no

#-----------NEW DEVICE--------------
Target[HDD.174]: `/etc/mrtg/disk_usage.sh` /dev/sdb
MaxBytes[HDD.174]: 100
Options[HDD.174]: gauge,nopercent,pngdate
Title[HDD.174]: Disk usage for /dev/sdb
PageTop[HDD.174]: <h1>Disk usage for /dev/sdb</h1>
YLegend[HDD.174]: Percent
ShortLegend[HDD.174]: %
Legend1[HDD.174]: Total Space
Legend2[HDD.174]: Used Percentage
LegendI[HDD.174]: Total
LegendO[HDD.174]: Free

#-----------NEW DEVICE--------------
Target[HDD.173]: `/etc/mrtg/disk_usage.sh` /dev/md0
MaxBytes[HDD.173]: 100
Options[HDD.173]: gauge,nopercent,pngdate
Title[HDD.173]: Disk usage for Software Raid 5
PageTop[HDD.173]: <h1>Disk usage for Software Raid 5</h1>
YLegend[HDD.173]: Percent
ShortLegend[HDD.173]: %
Legend1[HDD.173]: Total Space
Legend2[HDD.173]: Used Percentage
LegendI[HDD.173]: Total
LegendO[HDD.173]: Free
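The disk_usage.sh helper referenced by the Target lines above isn't shown in the thread. For anyone reproducing this: MRTG expects an external script to print four lines (first value, second value, an uptime string, and a target name). A hedged reconstruction using df (my own guess at the script, not the author's actual code):

```shell
#!/bin/sh
# Sketch of a disk_usage.sh-style helper for MRTG's external-script
# Target. MRTG reads four lines from the script: first value, second
# value, uptime string, target name. Here the values are used% and
# free% for the given device/mount; the real script from the thread
# is not shown, so these details are assumptions.
disk_usage() {
    dev="${1:-/}"
    used=$(df -P "$dev" | awk 'NR==2 {gsub(/%/,""); print $5}')
    free=$((100 - used))
    echo "$used"                   # first value  (used %)
    echo "$free"                   # second value (free %)
    uptime                         # uptime line (any string is fine)
    echo "disk usage for $dev"     # target name
}

disk_usage "$@"
```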


51
Feature requests & roadmap / Re: Monitoring
« on: September 03, 2011, 04:20:07 am »
Rather than running Cacti, I put forward using plain old MRTG, as implementing scripts will be easier and lighter, as will making changes/updates;

example;
-- create core mrtg.cfg --
do for each hdd on core (or mount points) from database - insert basic template filling details.
do for each NIC on core in database - insert basic template filling details.
Insert standard free memory template
Insert standard process template
etc..

--Create MD mrtg.cfg's --
do for each MD in database
   do for each hdd in current MD - insert basic template filling details.
   do for each NIC in current MD - insert basic template filling details.
   Insert standard free memory template
   Insert standard process template
   etc..
loop
 
*Advantages:
 As long as mrtg is installed on all MDs, the script just has to create and place/replace the mrtg.cfg on each MD (/usr/pluto/diskless/<md#>/etc/mrtg.cfg).
 You just have to create an appropriate place on the core that is mounted on all MDs to store the image and HTML files; /home/public/data/mrtg ?
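The pseudocode above could be sketched roughly like this. The MD numbers, drive list, and template fields are hard-coded stand-ins for illustration; the real script would pull them from the LinuxMCE database:

```shell
#!/bin/sh
# Rough sketch of the per-MD mrtg.cfg generation loop described above.
# MD numbers and drives are hard-coded stand-ins; the real script
# would query the pluto database for them.
OUT_BASE="${1:-/tmp/mrtg-sketch}"

# Emit one disk-usage Target stanza for a given MD and device.
emit_hdd_template() {
    md="$1"; dev="$2"
    cat <<EOF
Target[HDD.${md}]: \`/etc/mrtg/disk_usage.sh\` ${dev}
MaxBytes[HDD.${md}]: 100
Options[HDD.${md}]: gauge,nopercent
Title[HDD.${md}]: Disk usage for ${dev}
EOF
}

for md in 46 49; do                 # stand-in for "each MD in database"
    dir="${OUT_BASE}/${md}/etc"
    mkdir -p "$dir"
    cfg="${dir}/mrtg.cfg"
    echo "WorkDir: /var/www/mrtg/${md}" > "$cfg"
    for dev in /dev/sda; do         # stand-in for "each hdd in current MD"
        emit_hdd_template "$md" "$dev" >> "$cfg"
    done
done
```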

52
Users / Re: Timed Events Happening all days
« on: September 02, 2011, 12:35:33 am »
I have the log from last Friday, and you can see that at 11pm it turns off 2 MDs; the Friday tick box is unticked in the timed event.
http://modlog.net/temp/power_log.txt

53
Users / Re: Timed Events Happening all days
« on: August 31, 2011, 01:57:14 pm »
Yeah, I checked out the webadmin PHP code and found that too. I deleted the existing event and recreated it to see if it still occurs. If it does, I'm guessing it's a bug?

54
Users / Timed Events Happening all days
« on: August 27, 2011, 01:28:36 am »
Hi All,

I have had a timed event set up for a while; it's set up to turn off MDs during the week, but not on Friday and Saturday.
http://modlog.net/temp/timed.png
Sometimes it turns them off on Friday and Saturday nights. Are the settings saved as a cron job or similar that I can check?

55
Users / Re: NEWS: rc Time
« on: August 23, 2011, 11:00:01 am »
Excellent News, and huge thanks to ALL who have helped :-)

On another note, I'm guessing the mirror server is getting hammered with people downloading the new ISO. I'm giving it another go because it got to 2.8GB and stopped; perhaps a torrent might be a better way to share the load? I also have some space on my hosted server to act as a mirror, if interested.

Oh, and I'm running 10.04 on the core, so I guess this doesn't affect me at the moment, except for minor non-OS-dependent fixes that may filter through (scripts, etc.)?

56
Installation issues / Re: All orbiter text has disapeared 1004
« on: August 22, 2011, 03:34:02 pm »
I've had a similar problem; it turned out the appropriate fonts weren't installed. Maybe check that those have been installed? I think it was a package with ms-ttf in the name.

57
Users / Re: All MDs are down
« on: August 20, 2011, 06:07:19 pm »
The failure to mount your file system relates to an entry in your PXE cfgs, for example:
cat /tftpboot/pxelinux.cfg/01-00-19-d1-86-3d-93
...
APPEND initrd=139/initrd.img ramdisk=10240 rw root=/dev/nfs boot=nfs nfsroot=192.168.80.1:/usr/pluto/diskless/49
...

Which again is an NFS mount point, which should bring you back to the post I did earlier.
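If you want to check that the nfsroot in such a PXE config actually points where you expect, something like this parsing sketch can help (the sample APPEND line is the one quoted above; how you then verify the path is up to you):

```shell
#!/bin/sh
# Sketch: pull the nfsroot out of a pxelinux APPEND line and split it
# into server IP and export path. The sample mirrors the config
# quoted in the post.
sample='APPEND initrd=139/initrd.img ramdisk=10240 rw root=/dev/nfs boot=nfs nfsroot=192.168.80.1:/usr/pluto/diskless/49'

nfsroot=$(echo "$sample" | sed -n 's/.*nfsroot=\([^ ]*\).*/\1/p')
server=${nfsroot%%:*}   # everything before the first colon
path=${nfsroot#*:}      # everything after the first colon

echo "server: $server"
echo "path:   $path"
# On the core you would then verify the directory is really exported,
# e.g. check that $path exists and appears in /etc/exports.
```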

58
Users / Re: All MDs are down
« on: August 20, 2011, 03:44:54 pm »
Check to make sure the NFS daemon is running: /etc/init.d/nfs-kernel-server start - and check the logs for any errors.

I'd be checking '/etc/exports' for:
...
## BEGIN : DisklessMDRoots

/usr/pluto/diskless/46  192.168.80.0/255.255.255.0(rw,no_root_squash,no_all_squash,sync,no_subtree_check)
...
If you need to make changes, try 'exportfs -a -v' before trying to boot your MDs.

Check the file '/etc/fstab' on each MD: 'cat /usr/pluto/diskless/<MD#>/etc/fstab'
Look for something similar to the following:
...
192.168.80.1:/usr/pluto/diskless/46     /        nfs intr,nolock,udp,rsize=32768,wsize=32768,retrans=10,timeo=50 1 1
...
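To make the exports check above repeatable, a small sketch like this can scan an exports file for a given MD's diskless root (run here against a sample file mirroring the snippet above; on a real core you'd point it at /etc/exports itself):

```shell
#!/bin/sh
# Sketch: check that an exports file contains an entry for a given
# MD's diskless root. Uses a sample file for illustration.
has_md_export() {
    exports_file="$1"; md="$2"
    grep -q "^/usr/pluto/diskless/${md}[[:space:]]" "$exports_file"
}

# Sample mirroring the /etc/exports snippet quoted above:
cat > /tmp/exports.sample <<'EOF'
## BEGIN : DisklessMDRoots
/usr/pluto/diskless/46  192.168.80.0/255.255.255.0(rw,no_root_squash,no_all_squash,sync,no_subtree_check)
EOF

if has_md_export /tmp/exports.sample 46; then
    echo "MD 46 export present"
else
    echo "MD 46 export missing; fix /etc/exports then run 'exportfs -a -v'"
fi
```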

Tell us how you went.
Cheers

59
Feature requests & roadmap / Linuxmce Netboot Appliances
« on: August 19, 2011, 11:50:39 am »
Hi All,

LONG story short: I needed to migrate a 4x1TB soft RAID to a 4x2TB soft RAID, and I wasn't able to fit all 8 drives plus the OS drive in the server. After looking at a few bootable live CDs and NAS distros to try to get the old RAID back up and running to transfer the data, I finally had a 'DERRRR' moment and PXE-booted a PC as an MD, set the old RAID up on it, and migrated the data back to the server. So, just an idea, but maybe it would be good to have options when creating a new MD, e.g. "What do you want this PC to be? NAS, Squeeze-Slave PC, etc.", and then just run the appropriate script and extract the base image to the created diskless dir.

60
Feature requests & roadmap / Re: Asterisk and the line identifier
« on: July 28, 2011, 01:56:33 am »
This can be done by changing your dial patterns in FreePBX. Example: http://www.voip-info.org/wiki/view/Asterisk+Dialplan+Patterns
--------------------------
   _9011.          matches any string of at least five characters that starts with 9011,
                      but it does not match the four-character string 9011 itself.
--------------------------
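As a quick sanity check of that pattern: in Asterisk, a leading `_` marks a pattern and a trailing `.` means "one or more further characters", so `_9011.` matches 90110 or a full international number but not the bare string 9011. The equivalent match can be sketched with grep (the translation to a POSIX regex is mine):

```shell
#!/bin/sh
# Sketch: emulate Asterisk's `_9011.` dial pattern with grep.
# `_9011.` = literal 9011 followed by one or more further characters,
# so it matches strings of at least five characters starting with 9011,
# but not 9011 itself.
matches_9011_dot() {
    echo "$1" | grep -Eq '^9011.+$'
}

for num in 9011 90110 90115551234; do
    if matches_9011_dot "$num"; then
        echo "$num matches"
    else
        echo "$num does not match"
    fi
done
```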
