Recent Posts

Pages: 1 2 [3] 4 5 ... 10
21
Users / Re: apt-get upgrade problem
« Last post by l3mce on August 27, 2015, 09:54:18 pm »
or...

echo "Package: bash
Pin: version 4.1-2ubuntu3
Pin-Priority: 1001" > /etc/apt/preferences.d/bash

and update/upgrade.
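If you'd rather not write straight into /etc/apt/preferences.d/, here is a minimal sketch of the same pin staged to a scratch file first so it can be inspected before copying it into place (the temp path is illustrative):

```shell
# Write the pin stanza to a scratch file, then sanity-check it before
# copying it to /etc/apt/preferences.d/bash (done manually, as root).
PIN_FILE=$(mktemp)
printf 'Package: bash\nPin: version 4.1-2ubuntu3\nPin-Priority: 1001\n' > "$PIN_FILE"
# The stanza should contain exactly one Pin-Priority field:
grep -c '^Pin-Priority: 1001$' "$PIN_FILE"
```

Once it's in place, `apt-cache policy bash` should show the pinned version at priority 1001 before you run the update/upgrade.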
22
Developers / Re: Slow and steady ... moving from Trac to Gitlab
« Last post by posde on August 27, 2015, 08:21:56 am »
A note on CHT/Dianemo: They have long used their own repository, but do provide fixes back to us every now and then manually.
23
Users / Re: Need some dedicated testers.
« Last post by posde on August 27, 2015, 08:20:18 am »
The only auto-correct that is usable is the one from WP - everything else should be disabled ;)
24
Users / Re: Need some dedicated testers.
« Last post by maverick0815 on August 27, 2015, 06:45:03 am »
Yeah, this is what happens when autocorrect on the tablet kicks in... I meant to explain how they would be selected.
25
Developers / Re: Slow and steady ... moving from Trac to Gitlab
« Last post by mkbrown69 on August 27, 2015, 05:32:45 am »
A separate post to give an idea of my frames of reference. At $day_job, my role is now architecture and integration to support SysAdmins (which I used to be).  To do that, and to support a varied environment consisting of many flavours of Unix, Linux, hypervisors, and hardware architectures, we implemented a configuration and compliance management tool known as Puppet.  Puppet uses its own structured language to describe system end-states and maintain them as a configuration and compliance engine.  So, basically, you have infrastructure as code.

So, when you have code, and a need to maintain transparency and sustainable, reproducible infrastructure, you need a version control system. Originally, that was Git and Trac for us, and we made the switch to GitLab a few months ago for better productivity and for integration with other tools like Jenkins. When you deal with almost every form of *nix out there, keeping your infrastructure tooling consistent across all platforms is challenging. You can't always ensure that the vendors will be shipping exactly what you need, so you can end up dealing with even more variety in your tooling as well as in the stuff you're trying to maintain! It's enough to make you want to smash something!  So, sometimes you need to roll it yourself in order to ensure you have what you need, just the way you need it, when you need it. That's why we use Jenkins for building packages: to ensure our infrastructure tooling is in lockstep across all platforms.

So, basically I'm a SysAdmin using GitLab to maintain the tools my team needs to manage hundreds of systems. Our Git workflow represents a fairly linear model as a result.  We're just always making releases to increase capabilities or service catalog products, or just to deal with the fact that things change (like System V init to systemd). As such, we maintain a fairly simple workflow.

Master is always production-ready. Features get developed in their own branches. The develop branch is where integration happens, and dedicated "life-cycle systems management" instances are subscribed to that branch for dev/test and regression testing. Releases are tagged and pulled into master.  Master gets pulled into the systems management tool and applied to client test/dev systems. Upon acceptance from them, the changes get applied to validation systems. If those are good, they get air-gapped across to client staging/pre-prod instances and tested against a back-flush of prod.  Assuming no regressions, they get promoted to production systems. Rinse, repeat for the next cycle.
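For concreteness, the branch-and-tag mechanics described above can be sketched with plain git commands. This is only the plumbing, not the gating process, and the branch/tag names are hypothetical:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
main=$(git symbolic-ref --short HEAD)   # default branch plays the "master" role
git commit -q --allow-empty -m "initial production-ready state"
git checkout -q -b develop              # integration happens here
git checkout -q -b feature/demo         # features get their own branches
git commit -q --allow-empty -m "feature work"
git checkout -q develop
git merge -q --no-ff -m "integrate feature" feature/demo
git checkout -q "$main"
git merge -q develop                    # release gets pulled into master
git tag v1.0.0                          # releases are tagged
git tag --list
```

The gates (client acceptance, validation, staging, prod promotion) sit between those merge steps in the real process.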

So, in a lot of ways, workflow gets determined by how much risk someone is willing to accept. In a high availability environment, very little risk is accepted; hence a rigorous process with many gates and go/no-go decision points. Tooling always has to support the process. More risk tolerance lends itself to more of a continuous-integration type workflow, where small changes are integrated often and distributed widely, and the tooling usually supports a quick reversion capability if things go badly.

Most of the open-source projects that get used in the enterprise space operate on a two-pronged approach: nightly builds for those who develop or who like living on the edge, and stable releases for those who have to depend on it.

So, do we want to model or mimic workflows of similar projects (some of whom are on GitHub)?  The fork, branch, pull-request model does work well for most of those projects. We just need to consider the other tooling (like the builders)  and the overall release process for getting it out into the wild. However, LinuxMCE has challenges that something like OpenHab or AgoControl won't have. LMCE's "auto-magic" functionality requires deep hooks into operating system components, and those are OS version specific. So our processes and workflows may need to consider and allow for supporting an 'n', 'n+1', and 'n-1' release strategy.  Branching may or may not reflect that strategy, depending on how other processes and tooling handle it.

We have community contributors, and there's also Dianemo/CHT. Do we treat them any differently from the community?  GitLab allows for personal workspaces and team workspaces... GitLab also allows for branches to be "protected", limiting who can commit to them. This is a good idea for production branches.

More food for thought. There are opportunities to cast aside legacy here; the cost is having to consider all the options, and possibly blaze a new trail.

/Mike
26
Developers / Re: Slow and steady ... moving from Trac to Gitlab
« Last post by mkbrown69 on August 27, 2015, 03:54:23 am »
Ok.... It could be a case of the blind leading the blind here... ;p

To develop workflows, we'll have to decide on how we're going to handle branches and tags.  I'd suggest folks take a read here: https://about.gitlab.com/2014/09/29/gitlab-flow/

The "Gitlab" flow may be what we want to consider.  Right now, we have everything dumped into "master". Maybe we want to consider master as the common code repository, and create branches for each version release? That does mean maintaining pulls across however many streams (10.04, 12.04, 14.04) we intend to maintain. Tags can be used for release candidates or versions.

Or we go with master as production ready, and put conditionals into the code for every stream supported for the builders. Develop and feature branches are where all the work gets done. Tags can still be used to mark versions, milestones, RCs, etc.  Releases get pulled over to master for the builders to crank out release packages. Develop could get nightlies. 
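As a rough illustration of the "conditionals in the code" option, a build or install script could branch on the release codename. The mapping below (and the fallback) is purely illustrative, not LinuxMCE's actual build logic:

```shell
# Pick per-release behaviour from the Ubuntu codename; the fallback to
# "trusty" and the codename-to-init mapping are assumptions for the sketch.
codename=$(lsb_release -sc 2>/dev/null || echo trusty)
case "$codename" in
  lucid|precise|trusty) init_system=upstart ;;   # 10.04 / 12.04 / 14.04
  *)                    init_system=systemd ;;   # anything newer
esac
echo "detected init system: $init_system"
```

The upside is one branch for the builders to track; the downside is that every stream's quirks live in the same files.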

I think a lot of this now comes down to release planning. Are we running a continuous integration type release process, or a waterfall staged type process?

Just some points to consider...  HTH!

/Mike
27
Developers / Re: PHP to execute sudo command
« Last post by phenigma on August 27, 2015, 03:14:01 am »
Govo, this is *REALLY* awesome stuff!  The current proxy/filter menu items are hidden if the lmce packages are not installed.  Alblasco has been working on the firewall stuff and has some things prepared, but I'm not sure how much.  It'd be really great if you could get together with him in IRC and work out any remaining issues :)

I know next to nothing about iptables but let me know if I can help in the install/packaging department to bring this to everyone! 

Thanks for working on LinuxMCE btw!!

J.
28
Developers / Re: PHP to execute sudo command
« Last post by Govo on August 27, 2015, 01:30:18 am »
Hi Posde & Phenigma

Thanks for the replies.

The problem wasn't the PHP script; you have to allow access via visudo (Posde, thanks for pointing me in the right direction). According to a topic on the internet, the required configuration is:


# Cmnd alias specification
Cmnd_Alias DANSGUARDIAN = /etc/init.d/dansguardian, /usr/sbin/dansguardian

# User privilege specification
root ALL=(ALL) ALL
www-data ALL=NOPASSWD: DANSGUARDIAN
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
%www-data ALL=NOPASSWD: DANSGUARDIAN

This allowed PHP to execute the command to restart DansGuardian.
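One cautionary sketch on top of that: rather than editing the main sudoers file, the same rule can live in a drop-in file and be syntax-checked with `visudo -c` before it is installed. The file name and staging path below are just examples:

```shell
# Stage the rule in a scratch file and syntax-check it. The actual install
# (as root, mode 440, under /etc/sudoers.d/) is left commented out here.
f=$(mktemp)
cat > "$f" <<'EOF'
Cmnd_Alias DANSGUARDIAN = /etc/init.d/dansguardian, /usr/sbin/dansguardian
www-data ALL=NOPASSWD: DANSGUARDIAN
EOF
visudo -c -f "$f" 2>/dev/null || echo "visudo not available here; check on the core"
# install -m 440 "$f" /etc/sudoers.d/dansguardian
```

A syntax error in sudoers can lock you out of sudo entirely, so the check is worth the extra step.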

You can see it working in this quick video. To get around the error of no response received from the server after submitting the command, I put in a 10-second delay, and then a finish button to bring you back home.

https://www.dropbox.com/s/mv2m300lhf0rjf8/videoclip2%20DansGuardian%20restart.mp4?dl=0


In order for the PHP scripts to work, visudo has to be edited with the above code and write access given to the dansguardian LISTS folder.

I have written a PHP script to check if the database exists and, if not, create the database and import the tables.

Here's a short video of it creating the database and then importing the sql file

https://www.dropbox.com/s/vh83kyopg86sxkl/databaseimportandcreate.mp4?dl=0

I am working on this as a plugin. It has its own folder inside /var/www/lmce-admin, which allows me to move it to any version of LMCE. The only catch is that DansGuardian and squid3 have to be installed first, and some editing done to the iptables.


On that note, thanks for the replies I will keep you posted!


Gov.
29
Users / Re: Need some dedicated testers.
« Last post by golgoj4 on August 27, 2015, 01:13:01 am »
They are tagged correctly. I remember tschak saying, They Wohle eventuelle Benachrichtigungen selectable by show, season, episode....something like that.

Dunno what that means, but I will try to make the filters clearer..

30
Developers / Re: PHP to execute sudo command
« Last post by phenigma on August 26, 2015, 07:57:26 pm »
FYI all, we're in the AgoControl section here.  But I just wanted to add that in LinuxMCE, apache will permit script execution from the /usr/pluto/bin directory.  Put your scripts there and they will be executable from PHP under apache.
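A minimal sketch of that convention follows. A temp directory stands in for /usr/pluto/bin so it is safe to run anywhere, and the wrapper name is made up:

```shell
# Drop a wrapper script the way one would under /usr/pluto/bin; the sudo
# rule from earlier in the thread must already allow this exact command.
d=$(mktemp -d)   # stand-in for /usr/pluto/bin in this sketch
cat > "$d/restart_dansguardian.sh" <<'EOF'
#!/bin/sh
# Called from PHP under apache as the www-data user.
exec sudo -n /etc/init.d/dansguardian restart
EOF
chmod 755 "$d/restart_dansguardian.sh"
```

From PHP, that would then be something like `shell_exec('/usr/pluto/bin/restart_dansguardian.sh');`.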

J.