[Klug-general] Ansible

Kevin Groves kgroves at cix.co.uk
Thu Feb 5 20:36:43 UTC 2015


First thing: I've had a peek at one I did at work, and in the hosts.cfg 
file I have this, which might help to set the SSH user to connect with:

192.168.xxx.xxx ansible_ssh_user=user1
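
The same thing can be set for a whole group rather than per host; a sketch of an inventory, assuming the `servers` group name and addresses that appear later in this thread (note that `ansible_ssh_user` was later renamed to `ansible_user` in Ansible 2.0):

```ini
# Per-host override:
192.168.xxx.xxx ansible_ssh_user=user1

# Or for every host in a group:
[servers]
10.0.100.56
10.0.100.93

[servers:vars]
ansible_ssh_user=administrator
```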


On 05/02/15 10:14, Dan Attwood wrote:
> right this ansible lark is doing my nut in - simple automate my bottom!
>

:-) Not quite sure. What we could do is set up a couple of VMs or 
something and do a clean setup that you can then try to make sure it 
works. If you want to do this then let me know off list (should you 
have issues with security etc).



> I'm now getting:
>
>
> failed: [10.0.100.56] => {"failed": true}
> msg: Failed to lock apt for exclusive operation
>

Sounds like apt is not able to gain the root lock because it is running 
as a non-root user.
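
One quick way to confirm which user the tasks actually run as is a throwaway debug play; a minimal sketch using the 1.x-era `sudo:` syntax from this thread (newer Ansible spells it `become:`), with the `servers` group from Dan's inventory:

```yaml
# Debug play: if this prints "root", sudo escalation is
# working and the apt lock error lies elsewhere.
- hosts: servers
  gather_facts: no
  sudo: yes
  tasks:
    - name: show the effective remote user
      command: whoami
      register: who
    - name: print the result
      debug: var=who.stdout
```

Worth remembering that the same "Failed to lock apt" error also appears when another apt process (e.g. unattended-upgrades or a manual apt-get) already holds the lock on the remote machine.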


> I'm running the playbook with
>
>
> sudo  ansible-playbook ansible/upgrade-server.yml -vvvv -s -kK 
> --sudo-user administrator
>
> so i'm specifying use sudo and have the user as administrator
>
>
> On the server I'm connecting to, in my sudoers file I've got
>
> administrator ALL=(ALL) NOPASSWD:ALL
> %sudo   ALL=NOPASSWD: ALL
>
Yep that would do it.

>
> So ignoring how insecure that is for a moment: I can run sudo apt-get 
> whatever on the remote server without having to input a password - 
> I've confirmed this is the case
>
> My playbook looks like
>
>
> - hosts: servers
>   gather_facts: no
>   sudo: yes
>   sudo_user: administrator
>   tasks:
>    - name: updates a server
>      apt: update_cache=yes
>    - name: upgrade a server
>      action: apt upgrade=dist
>
>
> any thoughts kevin? anyone?
>
>


Right, there is also the other joy I have discovered: running sudo via 
ssh will bomb out when there is no tty. It's a security 'feature' that 
has caught me a few times. It should not be a problem if everything is 
used correctly with ansible, but if you are doing anything unusual then 
it could be the problem. I suggest a look through the sshd_config file 
to ensure remote ssh commands are not being broken. Silly question: is 
SELinux enabled? That can introduce what look like really bizarre 
errors because it is silently stopping things.
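
For the no-tty case specifically, the usual culprit is sudo's requiretty option; a sketch of what to look for in /etc/sudoers (edit it with visudo; the `administrator` user name is taken from Dan's setup above):

```
# If this line is present, sudo run over ssh without a tty
# fails with "sorry, you must have a tty to run sudo":
Defaults    requiretty

# Either comment it out, or exempt just the ansible user:
Defaults:administrator !requiretty
```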

If that playbook is what you are aiming for then I will set up a couple 
of Vagrant VMs and set it up. I can then throw over the boxes or configs 
for you. It will perhaps be Sunday, as I need to prep for going to 
London on Saturday.

Kev



>
> On 3 February 2015 at 19:35, Kevin Groves <kgroves at cix.co.uk> wrote:
>
>     So seeing things like:
>
>     "10.0.100.37" from file "/root/.ssh/known_hosts
>
>     100.37 isn't listed in your ansible hosts file so is that the
>     machine you are running FROM?
>
>     Seems odd that known_hosts is a problem - isn't that just the
>     record of host keys for machines you have connected out to?
>
>     I just looked at some of mine but I tend towards using root ssh
>     keys. I suggest stripping it back to a really simple task with
>     root keys, for example. I think there is a switch to prompt for
>     passwords instead of using keys, so it could be worth a try to
>     see which user(s) are really being used. It is even worth
>     switching on sshd server debug too, to see which end is doing what.
>
>     Kev
>
>
>
>
>     On 03/02/15 19:03, Dan Attwood wrote:
>>     logged in as administrator and sudo
>>
>>     so yes
>>
>>     I also tried it with one machine, adding administrator to the
>>     sudoers file with NOPASSWD:ALL, but no dice
>>
>>     On 3 February 2015 at 18:59, Kevin Groves <kgroves at cix.co.uk> wrote:
>>
>>         On 03/02/15 12:30, Dan Attwood wrote:
>>>          ssh-copy-id administrator at 10.0.100.93
>>>
>>>
>>
>>         OK, and you did that logged in as administrator? And when I
>>         say logged in as administrator, you did log in and not do su
>>         administrator?
>>
>>         Kev
>>
>>
>>
>>>         managed to hit send too soon
>>>
>>>         On 3 February 2015 at 12:29, Dan Attwood <danattwood at gmail.com> wrote:
>>>
>>>             my host files looks like this:
>>>
>>>             [all:vars]
>>>             ansible_sudo_pass=secretpassword
>>>
>>>             [servers]
>>>             10.0.100.56
>>>             10.0.100.72
>>>             10.0.100.93
>>>             10.0.100.38
>>>
>>>
>>>             my playbook is:
>>>
>>>             - hosts: servers
>>>               gather_facts: no
>>>               user: administrator
>>>               remote_user: administrator
>>>               sudo: yes
>>>               tasks:
>>>                - name: updates a server
>>>                  apt: update_cache=yes
>>>                - name: upgrade a server
>>>                  apt: upgrade=dist
>>>
>>>
>>>             So I thought I was pretty clear to ansible that the
>>>             user is 'administrator'
>>>
>>>
>>>             when i copied the keys over i did:
>>>
>>>
>>>
>>>             On 3 February 2015 at 12:26, Kevin Groves
>>>             <kgroves at ksoft-creative-projects.co.uk> wrote:
>>>
>>>
>>>                 On 03/02/15 09:00, Dan Attwood wrote:
>>>
>>>                     OK, I've done that and it speeds things up a bit.
>>>                     Unfortunately it speeds it towards the next
>>>                     fail. With the debug on I can see the errors
>>>                     listed below.
>>>                     I've double checked that I can ssh into the
>>>                     servers via key, and I'm following the notes I
>>>                     made when I had this working at home, so sad dan :-(
>>>
>>>                     error below
>>>
>>>                     error below
>>>
>>>
>>>                     fatal: [10.0.100.37] => SSH encountered an
>>>                     unknown error. The output was:
>>>                     OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
>>>                     debug1: Reading configuration data
>>>                     /etc/ssh/ssh_config
>>>                     debug1: /etc/ssh/ssh_config line 19: Applying
>>>                     options for *
>>>                     debug1: auto-mux: Trying existing master
>>>
>>>
>>>
>>>                     debug1: Control socket
>>>                     "/home/administrator/.ansible/cp/ansible-ssh-10.0.100.37-22-administrator"
>>>                     does not exist
>>>
>>>
>>>                 Is this home dir connected with an 'administrator'
>>>                 user? It could be that ansible is using the wrong
>>>                 user key to connect with what looks like 'root' on
>>>                 the other machine.
>>>
>>>                 Hopefully it's just a matter of which user is being
>>>                 used on which side.
>>>
>>>                 You might also want to take a look at the ansible
>>>                 config file. Mine is in /etc/ansible/ansible.cfg
>>>                 which has lines like:
>>>
>>>                 poll_interval  = 15
>>>                 sudo_user      = root
>>>                 #ask_sudo_pass = True
>>>                 #ask_pass      = True
>>>                 transport      = smart
>>>                 remote_port    = 22
>>>
>>>                 I think you can be specific about which users are
>>>                 used instead of assuming it knows what you really
>>>                 mean. :-)
>>>
>>>                 Kev
>>>
>>
>
>
>     _______________________________________________
>     Kent mailing list
>     Kent at mailman.lug.org.uk
>     https://mailman.lug.org.uk/mailman/listinfo/kent
>
>



More information about the Kent mailing list