Saturday, 25 February 2017

Docker compose to attach to an existing container

axsuul
is there a way for docker-compose to attach to an existing container?
12:17 am max3
i'm confused: how much space does one get on a public docker repo?
12:18 am
i'm confused: how much space does one get on a public docker repo?
12:18 am
whoops
12:23 am sathed_
Wonder if someone can help me with pushing a docker image to AWS... I'm kind of new to Docker, so I apologize... I've already authenticated docker to aws (aws ecr get-login). I have an image that I have successfully built from a Dockerfile. After it's built locally, I run `docker tag <image_id> aws_account_id.dkr.region.amazonaws.com/my-app`. Then, I run `docker push aws_account_id...amazonaws.com/my-app`. It lists out the ima
12:24 am
And I can see the new aws image after running `aws tag ...`
12:24 am
And maybe I'm in the wrong channel too... :P
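[ed: the registry hostname above is missing the `.ecr.` segment, which is one common cause of a failed ECR push. A hedged sketch of the usual flow, with placeholder account/region names:]

```shell
# assumed placeholders: <aws_account_id>, <region>, <image_id>
# 1. the ECR repository must exist before the first push
aws ecr create-repository --repository-name my-app

# 2. authenticate the docker client (2017-era CLI syntax, as in the chat)
eval "$(aws ecr get-login --region <region>)"

# 3. tag and push -- note ".ecr." between "dkr" and the region
docker tag <image_id> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-app
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-app
```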
12:52 am
nZane is now known as nZane_away
1:06 am wrkrcoop
i keep getting this error: There is insufficient memory for the Java Runtime Environment to continue.
1:06 am
how can i increase the space? im using docker-machine
1:07 am ada
ash_workz: last I heard the current version of docker-compose did not yet support v3 format
1:09 am f0xtr0t-qwerty-k
does anyone else have any issues restarting services in the centos 7 container?
1:10 am ada
ash_workz: do you have the latest docker-compose?
1:12 am wrkrcoop
im editing the docker-machine default to give it more memory, do i want to increase base memory?
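[ed: rather than editing the VM settings in the VirtualBox GUI, the machine can be recreated with a larger allocation. A sketch, assuming the virtualbox driver and a machine named `default`:]

```shell
# destroys the existing VM (and anything stored inside it!)
docker-machine rm default

# recreate with 4 GB of RAM instead of the 1 GB default
docker-machine create -d virtualbox --virtualbox-memory 4096 default
eval "$(docker-machine env default)"
```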
1:19 am axsuul
anyone using byebug + docker-compose? After exiting a byebug session, it seems to kill the container completely. How do I keep the container running after I exit?
1:21 am
nevermind
2:20 am
k23 is now known as k23_
2:27 am LiENUS
can i put a file into a docker secret? say a ssl cert?
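[ed: yes -- `docker secret create` takes a file directly, though secrets require swarm mode (new in 1.13). A sketch with illustrative names:]

```shell
# secrets only exist in swarm mode
docker swarm init

# create a secret straight from a file, e.g. an ssl cert
docker secret create site_cert ./cert.pem

# services (not plain containers) can consume it; it shows up
# in-container at /run/secrets/site_cert
docker service create --name web --secret site_cert nginx:alpine
```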
2:47 am ash_workz
ada: yeah, my problem was that I had compose version 7 I think and we're up to 10 now
2:47 am
now I am trying to figure out why when I link a directory in compose and touch a file in there from a container, the file doesn't show up on the local system
2:47 am
might be because I'm using docker machine
2:48 am
yeah, that's got to be it
4:00 am NullEntity
does docker have default container memory limits?
4:04 am Arcaire
No.
4:04 am
https://docs.docker.com/engine/admin/resource_c...
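[ed: per the linked page, there is no default limit -- a container can use as much memory as the host (or, with docker-machine, the guest VM) allows. Limits are opt-in per container; a sketch:]

```shell
# cap this container at 512 MB
docker run -d -m 512m --name limited nginx:alpine

# read the limit back (in bytes; 0 means unlimited)
docker inspect -f '{{.HostConfig.Memory}}' limited
```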
4:05 am ash_workz
why does my docker-machine have everything in my home dir in /hostname ?
4:05 am NullEntity
Arcaire, lol I think I'm just out of guest vm ram
4:05 am ash_workz
erm
4:05 am
/hosthome
4:06 am
that's gotta be a binding right?
4:06 am
I mean, it hasn't copied all my home files into the virtual-machine, right?
4:08 am
yup
4:10 am
do if I do something like `eval $(docker-machine env foo); docker-compose up` and I have inside compose: `volumes:``- ./ash_workz:/ash_workz` where is that volume going to link to?
4:18 am
woo. Looks like `volumes:``- /hosthome/absolute/path/to/ash_workz:/ash_workz` works
4:18 am
but is that what you're supposed to do?
4:42 am
hmm, so a docker volume will not show up on the remote machine despite eval $(docker-machine ...)
4:46 am SmashingX
how can I attach named volumes to docker?
4:46 am
how can I attach volumes to already started containers ?
4:46 am
if I didn’t run the container with -it flags how can I reattach to the container with a TTY?
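[ed: for the last question: the main process can't gain a TTY retroactively, but `docker exec` can open a fresh interactive shell; and volumes generally can't be added to a running container -- the usual route is to recreate it. A sketch with illustrative names:]

```shell
# open a new interactive shell in a running container,
# even one started without -it
docker exec -it mycontainer /bin/sh

# named volumes are attached at creation time; to add one,
# recreate the container with -v <volume>:<path>
docker volume create mydata
docker run -d --name mycontainer2 -v mydata:/data myimage
```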
5:01 am saintdev
Is there any documented way to migrate from overlay to overlay2?
5:19 am ttyonk
how can i run docker in docker with device mapper? below , failed to run docker.
5:19 am
# docker run --rm --privileged docker:1.13.1-dind -s='devicemapper'
5:20 am
time="2017-02-13T04:16:53.083437585Z" level=error msg="devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/comman...; Error starting daemon: error initializing graphdriver: driver not supported
5:30 am lrvick
So looking for simple ways to be able to have persistent data containers in an ec2 instance. Can I simply mount EBS to /var/lib/docker/volumes
5:30 am
and then transplant that folder to a fresh instance and have docker be able to use those same volumes by name?
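[ed: an untested sketch of the EBS idea -- named volumes (data plus the daemon's metadata) live under /var/lib/docker/volumes, so mounting EBS there and re-mounting it on the new instance should carry them across. Device name and paths are illustrative:]

```shell
# first use only: format the attached EBS device
sudo mkfs -t ext4 /dev/xvdf

# mount it over docker's volume directory while the daemon is stopped
sudo service docker stop
sudo mount /dev/xvdf /var/lib/docker/volumes
sudo service docker start

# on a fresh instance: attach the same EBS volume, mount it at the
# same path, and the named volumes should reappear
docker volume ls
```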
5:51 am
Amgine_ is now known as Amgine
6:41 am ash_workz
so I really don't get what happens here
6:42 am
I set up a docker-machine and threw on an ubuntu container running s6 overlay
6:44 am
I have in a compose file ./san:/san
6:44 am
for volumes
6:45 am
which worked effectively despite me at first not really knowing where that attaches with a vm
6:45 am
anyway, I installed postgres and created a DB, a TABLESPACE, a TABLE in that tablespace;
6:47 am
I figured that with a linked volume being the location for the db, the data would persist
6:47 am
but removing the container killed the data
6:48 am
not sure exactly why, but when I checked san, guess what? the data is still there .... hmmm
8:56 am bless
hi, can anyone help me? i'm having trouble connecting to a container via its internal ip (172.17.0.2)
8:58 am bobazY
bless: just ask your question
8:59 am bless
okay, i created an webserver using docker ubuntu 16.04 image. but i can't connect http://172.17.0.2
9:00 am
i want to connect to container without port-forwarding
9:00 am widgetpl
use --net=host
9:00 am
or sth like: `docker inspect "$1" | jq -M -c -r '.[].NetworkSettings.Networks[].IPAddress'`
9:02 am bless
"IPAddress": "172.17.0.2" displayed in `sudo docker inspect mycontainer`
9:03 am
is it possible using --net=host option when `docker start mycontainer` ?
9:04 am widgetpl
i think you have to use it during `docker run`
9:05 am
don't know if you are able to switch networks during running
9:05 am bless
hmm.. then i'll try to `docker run ` with --net=host option
9:05 am
thanks
9:05 am widgetpl
np
9:06 am eirikb
How can I add two IP addresses internally in my container? The container exists only for testing purposes, and will have one service running on 10.3.1.1 and another on 10.3.1.111, which will internally communicate with each other. Version 1.13.1
9:07 am bless
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"--net=host\": executable file not found in $PATH".
9:09 am widgetpl
bless: are you sure you are using correct command ?
9:09 am
like `docker run -d --net=host <image_name>`
9:11 am bless
ah, i mistyped command
9:14 am
widgetpl: my container's network is 'host', but IPAddress is empty ("IPAddress": "",)
9:15 am eirikb
Seems if I use --cap-add=NET_ADMIN the container can add addresses with ip addr add
9:17 am widgetpl
check if your webserver has bound to your host IP
9:17 am
`netstat -nap | grep <port_number>`
9:17 am czart__
eirikb: yeah, you can, but the static IP will be lost after container restart.
9:17 am
czart__ is now known as czart
9:18 am bless
i tried with openssh-server to simplify test.
9:18 am eirikb
czart: I only need the ip for internal communication, so it doesn't matter much. My test-script needs to add the IPs though. Do you have an alternative approach? I tried "docker network" + "--ip", but could only manage to set one
9:19 am bless
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3619/sshd << shown to me
9:19 am
i typed `netstat -nap | grep 22`
9:20 am widgetpl
are you using an sshserver on your machine ?
9:20 am
if yes it wont be possible to use port 22
9:20 am
try with different port
9:20 am
inside container
9:20 am
like 2222
9:21 am bless
ah, host option is sharing my host's network?
9:23 am
i really want a docker web server alongside my host's webserver, so i want to connect to the container's web server via http://172.17.0.2, and the host's web server via http://myouterip
9:23 am
without port-forwarding means
9:26 am ada
why not use port forwarding?
9:29 am bless
because i want to make multiple web development environments, then too many ssh ports (22, 221, 222, ...), too many web ports (80, 8080, 81, ...), too many mysql ports... i don't want it
9:29 am ada
mm seems flimsy reason to bypass docker's network security
9:30 am czart
eirikb: what about multiple IPs on a single nic interface? http://www.tecmint.com/create-multiple-ip-addre...
9:30 am
eirikb: you need to attach to a container, by: docker exec -it <cont-ID> /bin/sh, and set up things manually.
9:30 am bless
ah, it's network security?
9:31 am eirikb
czart: That is what I do when I'm inside the container now, using ip addr add, this is what I was able to do when using --cap-add=NET_ADMIN
9:31 am bless
then... do i have to use the port-forwarding instead?
9:32 am ada
bless: you should use port forwarding if at all possible as a best practice. --net=host bypasses the network security model
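[ed: the "too many ports" objection above is usually handled by giving each environment its own host-side mapping, or its own loopback address, while the container side keeps the standard ports. A sketch:]

```shell
# each environment maps distinct host ports to the same container port
docker run -d -p 8081:80 --name site1 nginx:alpine
docker run -d -p 8082:80 --name site2 nginx:alpine

# or bind each environment to its own loopback IP so all of them
# can use port 80 on the host side
docker run -d -p 127.0.0.2:80:80 --name site3 nginx:alpine
```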
9:33 am bless
okay. thank you ada widgetpl !
9:33 am czart
eirikb: maybe this: http://stackoverflow.com/questions/34110416/doc... ?
9:36 am eirikb
czart: Haven't seen that one. But I think I saw the approach, it requires me to have the container already running, before connecting the networks right? And the other solution is to use --net=host, which I'm not sure if I want to do
9:39 am czart
eirikb: I haven't tested it yet, so you would have to perform some testing on your own to answer your doubts. Generally, you need to create an interface, or use multiple IPs on a single nic. Do not expect that everything can be done with a single command in docker... There are many workarounds and sometimes "ugly" hacks to achieve things.
9:40 am bless
i have one more question. why do services not start automatically in the container?
9:41 am eirikb
The current "hack", to use --cap-add=NET_ADMIN, seems to suit me just fine. I already have a script for starting and testing the services, so having the script also add the IPs every time is fine
9:41 am bless
should i type `service nginx start` every time i start the container?
9:45 am czart
eirikb: http://pastebin.com/qsg45kAL
9:46 am eirikb
czart: Yes, it has to be created beforehand, it wouldn't work with say "docker run --rm" on an image?
9:47 am czart
eirikb: let's check.
9:49 am nschoe
Hi all
9:50 am czart
eirikb: http://pastebin.com/ga0g2kUJ Seems working.
9:51 am
eirikb: docker network connect <net-name> <con-ID> adds an interface to a container with <con-ID>
9:51 am eirikb
czart: Yeah I guess you could connect the networks after starting the container. My point is that I would always have to connect both networks when running my tests - given I start from --rm
9:54 am ash_workz
I really need to figure out how docker volumes work in compose
9:56 am czart
eirikb: http://pastebin.com/Ju2dtdSw check this? Do you mean restarting a container? --rm is for removing volumes as far as I remember.
9:57 am ada
ash_workz: specify a volumes: key and the paths you want to mount as values
9:57 am eirikb
czart: I mean removing the container
9:57 am ash_workz
ada: so, a little background, I'm working with docker-machine
9:57 am ada
bless: you shouldn't have to, but "service nginx" doesn't have anything to do with docker
9:57 am ash_workz
ada: I mean, I'm on linux, but I need to use docker-machine to simulate for db reproduction
9:57 am ada
docker-machine or no, docker-compose works the same way
9:58 am ash_workz
ada: anyway, my point is that I have defined in my compose file...
10:00 am
ada: https://gist.github.com/ash-m/53944c68753a428f9...
10:00 am
ada: I keep losing my db data if I destroy the container
10:00 am ada
in your volumes: key, if you don't use an actual path the docker daemon will use "named volumes" to store your data. if you want to mount a dir, use a path containing a slash
10:00 am bless
ada: but when i start my container, services(nginx, mysql, php-fpm, ssh) are not started automatically
10:00 am ada
/path/to/san:/san
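[ed: ada's distinction, sketched as a compose file (image and paths illustrative) -- an entry with no slash is a daemon-managed named volume; an entry starting with `./` or `/` is a bind mount:]

```shell
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  db:
    image: postgres:9.6
    volumes:
      - san:/san                            # named volume
      - ./pgdata:/var/lib/postgresql/data   # bind mount
volumes:
  san: {}
EOF
```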
10:00 am ash_workz
ada: isn't `san` and `fiodata` in that context just like doing `docker volume create san`
10:01 am ada
in that instance, yes
10:01 am ash_workz
ada: so, shouldn't it keep the data after I destroy the containers?
10:01 am ada
how are you stopping your containers?
10:01 am ash_workz
ada: maybe it's cuz I used docker-compose down.
10:01 am bless
i stopped 'exit' in my container
10:02 am ash_workz
ada: and docker rm -fv $(docker ps -aq)
10:02 am ada
bless: each container should be responsible for 1 concern
10:02 am
ash_workz: docker rm -v will remove volumes
10:02 am ash_workz
ada: but that shouldn't affect things made with `docker volume` right?
10:03 am
ada: I thought `docker rm -v` only removed unnamed volumes
10:03 am czart
eirikb: so, what's the problem of connecting to networks after a container was being run? That is, first 1) docker run --rm ... 2) docker network connect <net-name> <con-ID> .
10:03 am ash_workz
ada: (ie: not volumes made using `docker volume`)
10:03 am ada
ash_workz: https://docs.docker.com/engine/reference/comman...
10:04 am
it removes all associated volumes
10:04 am ash_workz
ada: that doesn't really clarify
10:04 am
I thought anything made using `docker volume` could only be destroyed using `docker volume rm`
10:04 am ada
no
10:04 am ash_workz
interesting
10:04 am bless
ada: i can't understand what it means.. `each container should be responsible for 1 concern`
10:05 am eirikb
czart: Nothing wrong about that, just that a single command becomes three commands. That is fine as well, but what is so bad about using --cap-add=NET_ADMIN? Then I can use one command. Sure I have to add the IPs, which are additional commands, but I already have a script on the container-side, while I don't have that on the host-side
10:05 am ash_workz
ada: alright well, I guess I'll need to be more careful then
10:05 am ada
bless: if you need multiple daemons running at the same time, they should be handled by separate containers and linked together with a docker network
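[ed: the one-concern-per-container advice, sketched as a compose file; compose puts the services on a shared default network where each can reach the others by service name (e.g. nginx talking to `php:9000`). Images/versions illustrative:]

```shell
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx:1.11
    ports:
      - "8080:80"
  php:
    image: php:7.0-fpm
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
```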
10:06 am ash_workz
ada: I thought that the -v flag was basically there to free up room when you've killed containers that had mapped volumes to some hash
10:06 am pLaTo0n
moin
10:06 am ash_workz
ada: I guess I know better now
10:06 am
ada: though the docs could maybe clarify a little bit
10:06 am
imo
10:06 am
ada: thanks though
10:07 am ada
sure
10:07 am bless
even though the daemons are installed in one container?
10:07 am czart
eirikb: the only thing about using --cap-add is security issues, but nothing else.
10:07 am
are security issues*
10:07 am ash_workz
ada: (vocabulary), volumes created with `docker volume create` are "named volumes" ?
10:08 am ada
bless: if you MUST install them all in the same container, you need to use some other daemon to manage your daemons. like supervisord for example
10:08 am
ash_workz: yes
10:08 am Muchoz
Are there any examples that use the Docker Go client to connect to remote hosts? I'm wondering how Docker authenticates the client but all I can find is docker registry authentication. See this: https://godoc.org/github.com/docker/engine-api/...
10:08 am eirikb
czart: I don't think that would be a problem here. I didn't see the IPs I added on the host, so it doesn't seem to bother the host environment
10:08 am ash_workz
ada: (vocabulary) volumes linked in like -v /path/to/host:/dir are... ?
10:08 am czart
eirikb: OK, so then there is probably nothing to worry about.
10:09 am ada
ash_workz: mounted volumes
10:09 am eirikb
czart: Ok. Thanks
10:09 am ada
ash_workz: or host volume
10:09 am ash_workz
ada: (vocabulary) and finally `-v /unnamed/volume` is an "unnamed volume" ?
10:09 am ada
theres no such thing as an unnamed volume
10:10 am
if you don't supply a path in the container it's mounted in the same path iirc
10:10 am ash_workz
ada: I thought you could just do `-v /path/to/volume` in docker run
10:10 am ada
yeah it just mounts to the same path in the container as the one you supply
10:13 am ash_workz
ada: so `-v /path/to/volume` is the same as `-v ./path/to/volume:/path/to/volume` ?
10:13 am ada
ash_workz: yeah
10:13 am
well, except the dot at the beginning
10:14 am ash_workz
ada: didn't it used to be that `-v /path/to/volume` would be like `-v /path/to/docker/{really long hash}/{another really long hash}:/path/to/volume` ?
10:14 am ada
I can't answer that
10:14 am ash_workz
ada: I distinctly remember something like that
10:14 am ada
docker volumes in your host's filesystem will look like they're in some crazy location if you use docker inspect
10:15 am
the "internal" volumes that are hash-based names
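[ed: for what it's worth, ash_workz's recollection matches the documented behaviour as I read it: a bare `-v /path` creates an *anonymous* volume with a hash name under /var/lib/docker/volumes, not a same-path bind mount, and `docker rm -v` removes only those anonymous volumes -- named volumes need `docker volume rm`. A sketch to verify:]

```shell
# a container-side-only -v creates an anonymous, hash-named volume
docker run -d --name anontest -v /data busybox sleep 300

# show where /data really lives on the host
docker inspect -f '{{range .Mounts}}{{.Name}} -> {{.Source}}{{end}}' anontest

# rm -v removes the anonymous volume; volumes made with
# `docker volume create` survive and need an explicit `docker volume rm`
docker rm -fv anontest
```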
10:15 am czart
eirikb: you are welcome. Still, there is a concern about security when you use --cap-add; you can drop it with --cap-drop I believe. When --cap-add for NET_ADMIN is set it means that you can configure the network from inside a container; whether that poses a threat to your env, assess on your own. Cheers.
10:15 am ash_workz
I see, so, that's kind of docker's space for mounting said files.
10:19 am
ada: oh! I know maybe what I'm thinking of... if you do something like `-v /path/doesnt/exist:/path` then docker creates that path locally; I think `rm -v` will get rid of that directory right?
10:20 am
ada: but it, of course, wont get rid of any mounted volumes like `-v /path/to/volume:/path` right? (since those files are not part of any container.
10:21 am ada
no, -v will never remove files in a mounted volume
10:21 am ash_workz
wow, I just can't believe how backwards my understanding was
10:21 am ada
or directories
10:21 am ash_workz
oh well
10:22 am nschoe
Hi back
10:22 am
Hi ash_workz :)
10:23 am ash_workz
hi nschoe
10:23 am
:)
10:23 am nschoe
's up?
10:23 am ash_workz
pullin' an all nighter x.x
10:24 am ash_workz
I wonder if there's a more effective method for rebuilding your images rather than deleting dangling images
10:25 am ada
ash_workz: you can force rebuild with docker and docker-compose
10:25 am ash_workz
ada: yeah, but it leaves untagged images
10:26 am ada
do you assign a tag when you build?
10:26 am ash_workz
ada: yeah
10:27 am
so I added these to my bashrc https://gist.github.com/ash-m/3fa32d793ca115c5c...
10:28 am nschoe
ash_workz, you can use the --no-cache option to force rebuild indeed. And yes it leaves those nasty <none> images. Personally I have a cronjob that periodically prompts me to clean the dangling images.
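[ed: nschoe's periodic cleanup, sketched; `-f dangling=true` selects exactly the `<none>` images:]

```shell
# remove all dangling (<none>) images; errors if there are none
docker rmi $(docker images -q -f dangling=true)

# as a cron entry (daily at 03:00); xargs -r skips the call
# when the list is empty
# 0 3 * * * docker images -q -f dangling=true | xargs -r docker rmi
```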
10:29 am ak77
Hello, I can't find any examples of docker-compose to bind container to existing bridge on host, can anyone give a hint or two?
10:29 am ash_workz
it'd be nice if a `docker rebuild` would just delete the ancestor
10:29 am
I guess
10:30 am
nschoe: what did you think of those functions?
10:30 am nschoe
ash_workz, what functions?
10:30 am ash_workz
nschoe: my gist ^
10:31 am Bejjan
o7
10:32 am pluszak
I'm trying to push an image to a self-hosted gitlab docker registry and after pushing a few layers successfully I get "Authorization required error". Is it a common thing for the auth to timeout during the upload?
10:32 am
It didn't do that a few weeks ago
10:32 am nschoe
Hi Bejjan ^^
10:34 am ada
ak77: https://docs.docker.com/compose/networking/#/us... and use docker0 as the external network name
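[ed: a sketch of the external-network wiring. Note compose can only join networks the docker daemon knows about, so a brctl-made bridge must first be wrapped with `docker network create`; the `-o com.docker.network.bridge.name=...` option (which I believe names/reuses the underlying Linux bridge) may help for an existing bridge. Names illustrative:]

```shell
# make the bridge known to docker (or wrap an existing one by name)
docker network create -d bridge \
  -o com.docker.network.bridge.name=scm-br0 mybridge

cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx:alpine
    networks:
      - mybridge
networks:
  mybridge:
    external: true
EOF
```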
10:36 am ash_workz
nschoe: so what're you up to?
10:37 am apaze
Hi everyone! I'm wondering how one would secure inter-container communication. Is creating an overlay network, like Swarm proposes but just for a single host, with the `--opt encrypted` option the way to go?
10:38 am
The documentation explains that it encrypts the network between the Swarm nodes, so no use on a single host I guess. Then... how to secure inter-container communication? Is there a way to encrypt it?
10:38 am nschoe
ash_workz, working on Part V ^^
10:38 am ash_workz
neat
10:39 am nschoe
ash_workz, nice functions in your gist. I really don't like the eval part though.
10:39 am
It's so easy to completely mess your system because you blindly eval user-supplied command. Otherwise, it's nice.
10:39 am
I prefer to have a cron that periodically checks / removes dangling images, though.
10:40 am ash_workz
that seems to be the standard; I've seen that suggestion on more than one forum ^ ^;
10:41 am
nschoe: but yeah, the eval is just because that's what the docs say
10:41 am
oh
10:41 am
no, not that other one
10:41 am nschoe
ash_workz, what do you mean that's what the doc says
10:42 am
I don't think any self-respecting documentation told you to blindly "eval" user-input command xD
10:42 am ash_workz
nschoe: I thought you were talking about the eval $(docker-machine env ... )
10:42 am nschoe
ash_workz, ho no, that's fine ^^
10:42 am
ash_workz, I'm talking about the other `eval`.
10:42 am ash_workz
yeah, any recommendations on that?
10:43 am nschoe
eval $(docker-machine env ... ) doesn't eval user-input string, it evals what `docker-machine` outputs. And we have to trust it (trust begins somewhere).
10:43 am ash_workz
nschoe: I mean, I basically wanted to just pass-thru to docker build, but also trigger a cleanup
10:43 am nschoe
ash_workz, yes, make a proper parsing of the arguments.
10:43 am
ash_workz, understand me: the script you have is fine in most senses. It will do the job, and if you're the only one using it. You're not likely to have problems. It'
10:43 am ash_workz
bwah. :( docker already did that. >.<
10:44 am Muchoz
What is a valid docker host? https://godoc.org/github.com/docker/engine-api/...
10:44 am nschoe
s just that, I'm paranoid, and it's a matter of habit for me to think about things securely (says the guy who still hasn't switched his website to https xD ^^)
10:44 am
ash_workz, what do you mean?
10:44 am Muchoz
The docs really give you the middle finger for when you want to use it.
10:45 am
Is there even any useful documentation for the client?
10:46 am ash_workz
nschoe: parsing the litany of argument combinations for docker build I feel like is out of the scope for a .bashrc function :P
10:46 am Bejjan
nschoe: hey :-) (sorry got distracted by work) ^^
10:47 am nschoe
ash_workz, ah but it's because you're not thinking evil enough. Docker will parse the arguments for the docker build
10:47 am
But what if I called your script with this string:
10:47 am
ash_workz, --help >> /dev/null; echo 'hello'
10:47 am
ash_workz, this will translate to: `docker build --help >> /dev/null; echo 'hello'`, which simply passes --help to `docker build`, then redirects output to /dev/null. And THEN executes WHATEVER I put behind the semicolon
10:48 am
ash_workz, now replace `echo 'hello'` by `rm -f ./` and you see the problem.
10:48 am
This is a _very classic_ injection.
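[ed: the injection-free version of the pass-through idea: forward arguments with `"$@"` so each stays a single word, and a `; rm -rf ./` payload remains a literal argument instead of being executed. A sketch of the wrapper (function name illustrative):]

```shell
# pass-through to docker build, then clean up dangling images;
# "$@" forwards the caller's arguments verbatim, unevaluated
docker_rebuild() {
    docker build "$@" || return 1
    dangling=$(docker images -q -f dangling=true)
    # remove <none> images only if any exist
    [ -n "$dangling" ] && docker rmi $dangling
    return 0
}

# usage: docker_rebuild -t myimage .
```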
10:52 am ash_workz
😗
10:53 am
nschoe: (out of curiosity, how do you interpret that?)
10:54 am sieve
Hiya, there is some kind of mechanism on this host that is scaling a docker container, sometimes I see two containers, sometimes 4
10:54 am
Can someone tell me how I can discover the mechanism doing this?
10:55 am
I notice that the containers are not surviving very long
10:55 am telling
Look in your list of running processes? Look in cronjobs.
10:55 am nschoe
ash_workz, what do you mean, how do I interpret that? This is a classic cmd-line argument
10:56 am
Bejjan, so how is that cluster holding?
10:57 am ash_workz
nschoe: heh, I meant how do you interpret 😗; I am always afraid to use it because I feel like it'll be misinterpreted, but it conveys my sentiment accurately
10:57 am ada
i see a face emoji
10:58 am
or perhaps its a US electrical outlet
10:58 am nschoe
ash_workz, well this is a weird ASCII symbol. I can't really make the details out, so I don't know what this is supposed to mean :/
10:58 am
ada, ah ah, I will think of this as the outlet indeed^^
10:59 am ash_workz
ah, well, I guess it could be called a 'duck face' but it could easily be interpreted as a 'kissy face'... the sentiment is like this though: https://s-media-cache-ak0.pinimg.com/736x/b0/c5...
10:59 am
nschoe: ^
11:00 am nschoe
ash_workz, it... doesn't really help me xD
11:00 am
I still can't make the emotion.
11:01 am ash_workz
woot; fail
11:01 am nschoe
Anyway, regarding injection I've just demonstrated to you why people should never use eval like this. But as I said, if you're the only one using that script, go for it.
11:14 am czart
Has anyone created a bridge interface on a VM that runs the boot2docker image? The 'brctl' command is missing on the distro. Is there any other way to create a bridge (the same as 'docker0') with a different set of commands or approach?
11:14 am pluszak
Okay, I've found that the auth token was set to 5 minutes
11:25 am czart
Figured it out: ip link add br0 type bridge ...
11:26 am ak77
ada: I have bridge, created with brctl on the host, how does this external network binds to it?
11:51 am Bejjan
nschoe: cluster is holding nicely, got a windows hyper-v 2016 server joined in as well
11:52 am nschoe
Bejjan, nice. Has it started farming?
11:53 am Bejjan
nschoe: not yet, but some wallet services are running, luckily im in no rush since it's more of a hobby :-)
11:54 am nschoe
Okay, good.
11:55 am Bejjan
spent most of my weekend drinking whisky and playing games, was a nice break from all the tech stuff
12:08 pm sgfgdf
hi, guys! is it possible to group logs from a docker-compose service? for example see only error messages from nginx without access logs.
12:10 pm mattmcc
sgfgdf: Wouldn't that be up to your nginx config?
12:12 pm sgfgdf
mattmcc: yes, but at the moment everything (error_log and access_log) is sent to /dev/stdout
12:13 pm mattmcc
So change your nginx config. Send access logs wherever (/dev/null if you don't care about them)
12:14 pm sgfgdf
mattmcc: my idea is to filter logs only if you want and have the option to see all of them if needed.
12:15 pm
mattmcc: at the moment i see logs like this `docker-compose logs -f` and i was wondering if there is some type of filtering.
12:16 pm mattmcc
No, that command just tracks standard output.
12:17 pm
It sounds like you want to either divert the nginx access log somewhere else, or ignore it completely.
12:19 pm proti
morning
12:25 pm sgfgdf
mattmcc: is it a good practice to send the access logs for example to a log file somewhere and direct the errors to stdout so they can be easily caught if docker logs is used?
12:26 pm mattmcc
That probably depends on what your objectives are.
12:28 pm sgfgdf
mattmcc: well normal web development. probably developers most of the time want to know about an error instead of being flooded with access logs.
12:30 pm mattmcc
Yeah, so like I said, I'd probably either send access logs somewhere else, or just drop them.
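[ed: the two halves of that advice, sketched. nginx side: a config fragment (for the http{} or server{} context) keeping errors on stderr -- which is what `docker logs` shows -- while silencing access logs; consumer side: compose has no built-in filter, but the stream can be piped.]

```shell
cat > nginx-logging.conf <<'EOF'
error_log  /dev/stderr warn;
access_log off;
EOF

# stopgap without touching nginx: filter the compose stream, e.g.
#   docker-compose logs -f nginx | grep -v 'GET /'
```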
12:34 pm
mgoldmann|away is now known as mgoldmann
12:39 pm Ronis_BR
guys, if I have two services that use postgresql, should I have just one container for postgres?
12:53 pm ak77
is there a way to use hosts bridge network interface (created with brctl) from docker-compose ?
12:55 pm ada
ak77: what's the name of the interface?
12:55 pm ak77
ada: e.g. scm-br0
12:56 pm ada
just use that name as the name of the network
12:56 pm
tell docker-compose to use an external network, using the name of the interface you have
12:56 pm ak77
ada: "ERROR: Network scm-br0 declared as external, but could not be found. Please create the network manually using `docker network create scm-br0` and try again."
12:58 pm
ada: docker --version = Docker version 1.13.1, build 092cba3
12:59 pm steerio
hi all! trying to use shared mounts from Docker to no avail ("is mounted on / but is not a shared mount"). The canonical answer here is to use MountFlags=shared in Docker's systemd service file, but this box is using System V init scripts. Any pointers?
1:02 pm ada
ak77: you may need to create the network with 'docker create'
1:02 pm
ak77: what are you trying to accomplish
1:03 pm ak77
ada: have a container bind to specific network interface
1:03 pm ada
for why
1:04 pm ak77
ada: a service was moved, dockerized from a machine with specific IP. I want to preserve the IP.
1:06 pm ada
maybe something like this http://sttts.github.io/docker/network/2015/01/3...
1:07 pm
i can't find a specific way to tell docker to bind to an interface that wasn't created with 'docker network' without changing the default bridge
1:09 pm ak77
ada: not daemon wide, just for single container, something like this https://github.com/docker/docker/pull/6704
1:10 pm
ada: there used to be --bridge command which I think is something I need.
1:10 pm sgfgdf
mattmcc: thank you very much for the help!
1:11 pm ada
ak77: can you not fix the issue in DNS?
1:12 pm ak77
ada: no, service is accessed by IP
1:14 pm
NickG365_ is now known as NickG365
1:41 pm Armays
what means docker run -d --name gitlab-runner --restart always \
1:41 pm
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
1:41 pm
gitlab/gitlab-runner:latest ?
1:42 pm Zerberus
Armays: you can find all options explained by "docker run --help"
1:44 pm Armays
ok Zerberus i did the tutorials, but i dont understand the ':'
1:47 pm Zerberus
Armays: the ':' used where? for the volume or in the image tag?
1:47 pm Armays
for the volume
1:48 pm Zerberus
Armays: it seperates the location on the docker host and the location inside the container
1:49 pm Armays
Zerberus the docker host is /srv/gitlab-runner/config or /etc/gitlab-runner ?
1:50 pm Zerberus
Armays: /srv/gitlab-runner/config
1:52 pm Armays
Zerberus it means that /srv/gitlab-runner/config is copied and used as a data volume ?
1:54 pm Zerberus
Armays: it is a bind mount
2:00 pm Armays
ok Zerberus this is the tutorial of installing git lab multi runner, i dont understand because at the beginning /srv/gitlab-runner/config doesnt exist ?
2:02 pm Zerberus
Armays: https://docs.docker.com/engine/tutorials/docker...
2:04 pm Armays
thanks Zerberus, i'll read about bind mounts
2:17 pm crashev
is there a reliable alternative to osxfs ready in docker? osxfs is so slow that it's totally unusable
2:18 pm gpkfr
good Question.
2:21 pm Armays
for a gitlab runner installation, should i use `always` or `unless-stopped` ?
2:27 pm
-- BotBot disconnected, possible missing messages --
2:35 pm proti
If you don't want an interactive process (i.e. a background process, like httpd), there are very few use cases where you don't need the -d.
2:36 pm Armays
proti i have to install gitlab multi runner, i wanted to install it with docker, so that gitlab build images of my repos for continuous integration
2:37 pm
is my case appropriate for background ? if yes, why ?
2:37 pm rahlquist
hey everyone, currently on 1.13.1, build 092cba3 on ubuntu 16.04, I dont see any bugs listed but for some reason autocomplete has stopped working when I try to use it for container id at the command line, anyone else had this issue?
2:53 pm Zerberus
Armays: you nearly always want to run your long-running containers in detached mode, like you would with an apache service or your mail server process
2:59 pm Armays
ok Zerberus thanks
3:00 pm
I am new to continuous integration and I want to improve myself, can someone help me ? I can pay for assistance
3:13 pm gebbione
is there a best practice lifecycle to mount files in containers with volumes? i realise i might not need to replace containers or always build images if the only change is a file change
3:13 pm Armays
sorry, connection issues
3:41 pm lordjancso
how can i specify (override) the project name?
3:42 pm
now the container name generated by {directory-name}_{service_name}_{service_number}
3:42 pm
and directory-name is the project name
3:42 pm
how can i override that part?
3:42 pm
in a docker-composer file
3:42 pm telling_
Theres an environment variable you can set
3:43 pm telling
lordjancso: https://docs.docker.com/compose/reference/envva...
3:43 pm lordjancso
telling i've found this variable but where should i put this into my compose file?
3:43 pm telling
lordjancso: you sholdnt
3:43 pm
shouldn't
3:43 pm lordjancso
then?
3:43 pm telling
Its in your environment on your host.
3:44 pm lordjancso
yeah but i want to bound this name to the project
3:44 pm telling
Its read by the docker-compose binary you invoke
3:44 pm lordjancso
because now i have a docker-compose.yml and a docker-compose-dev.yml file
3:44 pm
with the same services but different config
3:44 pm
and i cannot test both on a single machine because its name is the same
3:45 pm telling
Okay, I've now explained HOW you do this. I cant do anything about what YOU want, you'll have to fix it on your own, in your environment.
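[ed: concretely, for running the same services twice on one host: the project name can be set per invocation with `-p`, or (I believe since compose 1.8 or so) pinned next to the compose file in a `.env` file, which is the closest thing to binding the name to the project:]

```shell
# per-invocation project names
docker-compose -p myapp_prod -f docker-compose.yml up -d
docker-compose -p myapp_dev  -f docker-compose-dev.yml up -d

# or pin it beside the compose file so it travels with the project
echo 'COMPOSE_PROJECT_NAME=myapp_dev' > .env
docker-compose -f docker-compose-dev.yml up -d
```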
3:45 pm thethorongil
how can i make a container in my compose file or another container by their hostname?
3:45 pm
*search
3:46 pm proti
Armays: if you want to use gitlab multirunner for QA tests, then you don't need the -d
3:47 pm lordjancso
telling but now do you understand my problem. do you know some solution?
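[What telling is pointing at, sketched out as a hedged editor's note: compose takes the project name from the -p flag or the COMPOSE_PROJECT_NAME environment variable, and recent versions also read a .env file next to the compose file, which is the way to bind the name to the project as lordjancso wants. "myapp" is a placeholder:]

```shell
# option 1: the -p flag, per invocation
docker-compose -f docker-compose.yml -p myapp up -d
docker-compose -f docker-compose-dev.yml -p myapp_dev up -d

# option 2: the environment variable
COMPOSE_PROJECT_NAME=myapp_dev docker-compose -f docker-compose-dev.yml up -d

# option 3: a .env file next to the compose file, read automatically
echo 'COMPOSE_PROJECT_NAME=myapp' > .env
```

[With two different project names, both compose files can run on one machine without container-name collisions.]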
3:47 pm larson
hey folks
3:47 pm
i'm trying to use docker-machine to control the containers running on my linux server
3:47 pm
$ docker-machine create --driver none --url=tcp://the_host_ip:80 servername
3:47 pm
output said the machine had been created
3:47 pm
then I run this
3:47 pm
$ docker-machine env servername
3:47 pm
and i get this error:
3:47 pm
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "the_host_ip:80": dial tcp the_host_ip:80: getsockopt: connection refused
3:47 pm
any thoughts?
3:49 pm telling
larson: driver none?
3:49 pm
I dont see that driver in here: https://docs.docker.com/machine/drivers/
3:50 pm larson
telling: i got the inspiration from here
3:50 pm
https://docs.docker.com/machine/get-started-cloud/
3:50 pm
section "Adding a Host without a driver"
3:50 pm
$ docker-machine create --driver none --url=tcp://50.134.234.20:2376 custombox
3:50 pm thethorongil
http://pastebin.com/F64sjiPZ this is my compose file. now I want to give the hostname of the choreographer container to the env value in the console section.
3:51 pm larson
telling: so `none` is a legit option
3:51 pm thethorongil
what value should I give? because it's not working if I define the hostname and put the value there
3:52 pm telling
larson: right. I can't help, sorry; there's no documentation for what that does as far as i can see.
3:52 pm reynierpm
hi, morning
3:52 pm larson
telling: ok, thanks anyway. anyone else have an idea?
3:58 pm thethorongil
https://github.com/docker/machine/issues/3267
3:58 pm
larson: this may help you !!
3:59 pm telling
larson: i really think driver none was a mistake. You should've used the generic driver, it would've set up your certs etc for you.
3:59 pm larson
thetorongil: thanks, but that issue is not relevant as go is not involved in my project
4:00 pm thethorongil
or firewall , maybe !!
4:00 pm larson
telling: ah THE `generic` driver
4:00 pm
i'll try that
4:03 pm
right, he seems happy with that
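[For anyone following along, the generic driver telling suggested takes SSH credentials and an IP and handles the TLS cert generation that the none driver skips. A rough sketch; the IP, user, and machine name are placeholders:]

```shell
# provision an existing Linux host over SSH; docker-machine installs
# docker if needed and generates TLS certs for remote control
docker-machine create --driver generic \
  --generic-ip-address=203.0.113.10 \
  --generic-ssh-user=ubuntu \
  servername

# point the local docker client at the remote daemon
eval "$(docker-machine env servername)"
docker ps
```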
4:04 pm philm88
Hey all. Looks like something changed with how docker deals with syslog when I wasn't looking. I have a syslog config that's meant to pull the docker logs out into a separate log file. It just does this; https://pastebin.mozilla.org/8979146 -- unfortunately, it looks like the program name isn't Docker anymore. Instead, programname is the container ID. What's the correct way to target logs ...
4:04 pm
... coming out of docker now with syslog?
4:09 pm widgetpl
has anyone encountered such an issue with the newest docker https://gist.github.com/widgetpl/2d6e90e01f2eca...
4:09 pm tuudik
Hi! Can anyone recommend any cheat sheets? something to print out on paper, for docker?
4:12 pm widgetpl
it happened when influxdb ate whole RAM
4:12 pm
container was running without any mem limitations
4:13 pm
right now i have a docker container in unhealthy status and I am not able to remove it
4:19 pm
brimston3 is now known as brimstone
4:23 pm donkeycongo
Can I add a hostname to a service configured with docker-compose?
4:27 pm rickardo1
I need to access the host user id during docker-compose build but can't figure out or google how to do that.. anyone?
4:32 pm sjj
rickardo1: you could pass it in as a build argument and use an environment variable.
4:32 pm nschoe
Hi back
4:33 pm rickardo1
sjj: Yes, but not automatically.. I don't want new developers to change the compose file
4:33 pm sjj
rickardo1: why would they need to change the compose file?
4:34 pm rickardo1
sjj: I tried php: -> build: -> args: - 'user_id=${UID}' but it's not set - 'user_id=1000' work though
4:35 pm sjj
rickardo1: is that environment variable set in the context in which you're invoking docker-compose build?
4:38 pm
(if it's not being interpolated then the answer is no)
4:47 pm rickardo1
sjj: yes
4:47 pm
sjj: echo $UID returns 1000
4:53 pm sjj
rickardo1: so what?
4:53 pm
rickardo1: $UID is a shell builtin variable, that doesn't automatically make it an environment variable.
4:54 pm ash_workz
is there a way to execute a litany of dockerfile commands if an ARG is est?
4:54 pm
set*
4:55 pm rickardo1
sjj: ok, but do you get what I am looking for here? To get the host uid during build to set up a user with the same uid in the container
4:55 pm sjj
rickardo1: well I told you how to do it
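[sjj's point, made concrete: compose can interpolate ${UID} into a build arg, but only if UID is actually in the environment -- bash's $UID is a builtin and not exported by default, which is why 'user_id=1000' worked and 'user_id=${UID}' didn't. A hedged sketch of the compose fragment:]

```yaml
# docker-compose.yml (fragment); compose interpolates ${UID}
# from the environment at build time
version: '2'
services:
  php:
    build:
      context: .
      args:
        - user_id=${UID}
```

[Then `ARG user_id` in the Dockerfile (e.g. `RUN useradd -u $user_id app`), and build with `export UID; docker-compose build`, or inline: `UID=$(id -u) docker-compose build`.]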
4:56 pm ash_workz
something like https://gist.github.com/ash-m/54549c4d561346cc4...
4:57 pm donkeycongo
Can I add a hostname to a service configured with docker-compose?
5:01 pm sjj
donkeycongo: https://docs.docker.com/compose/compose-file/#/...
5:09 pm donkeycongo
sjj: Isn't that applied inside the container and not on the host system?
5:09 pm sjj
donkeycongo: yes, that's what I understood your question to be asking.
5:11 pm donkeycongo
sjj: My bad. What I want is to point a domain to the container, so I don't have to update my hosts file.
5:11 pm sjj
if you want an entry added to the host's /etc/hosts file you'll have to add it yourself.
5:13 pm
mgoldmann is now known as mgoldmann|away
5:13 pm donkeycongo
I ran another docker setup that automatically pointed m2.localhost to 127.0.0.1 so it hit the container without me having to update my hosts file
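[To untangle the two sides of this: compose's `hostname:` key (and network aliases) only affect name resolution inside the docker network; making a name like m2.localhost resolve on the host itself needs a hosts-file entry or some local DNS trick, which is presumably what donkeycongo's other setup provided. A sketch:]

```yaml
# docker-compose.yml (fragment): this name resolves only inside
# the compose network, not on the host
services:
  web:
    hostname: m2.localhost
    ports:
      - "80:80"
```

[On the host, resolution still has to be added manually, e.g. a `127.0.0.1 m2.localhost` line in /etc/hosts, after which the published port makes the container reachable at that name.]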
5:18 pm clouddig
If I RUN something in a Dockerfile that modifies the filesystem, will those modifications be saved in the resulting image?
5:19 pm steerio
That's the whole idea of it
5:19 pm clouddig
Then I'm stumped http://stackoverflow.com/questions/42208442/mav.... I have a simple Dockerfile, but the downloaded dependencies are not being cached in the image.
5:20 pm
I suppose it's possible that "mvn dependency:go-offline" isn't actually saving them to the filesystem like I think it should...
5:21 pm steerio
I wouldn't go guessing by size, you could try to verify if they're there using a shell
5:22 pm
docker run -ti my-maven bash
5:22 pm
you should be able to navigate around and check for those downloaded files
5:23 pm
(you might also want to add --rm to that command line, so the container you created just for looking around with the shell doesn't stick around)
5:24 pm ash_workz
can I use docker hostnames in postgres commands?
5:25 pm clouddig
steerio: good idea. I don't see the dependencies cached where I expect them. I'll explore the idea that I don't really understand that maven command
5:25 pm ash_workz
when I try to run `pg_basebackup ... -h <primary_hostname>` it says it can't find it
5:25 pm steerio
clouddig: you can even run that command in that bash shell
5:26 pm
clouddig: and then see if it even works in the first place
5:33 pm clouddig
steerio: the command does work (both from a bash and when run as part of docker build). When I run it in bash, I see the files in the /root/.m2/ directory, where I expect them. I'm still puzzled that these files wouldn't remain after the docker build process.
5:34 pm Spec
clouddig: what's your dockerfile look like
5:35 pm clouddig
http://stackoverflow.com/questions/42208442/mav...
5:35 pm
Spec: It's just five lines as shown in that link to SO
5:37 pm Spec
oh
5:37 pm
it's because your parent image is setting VOLUME "$USER_HOME_DIR/.m2" i'd guess
5:37 pm robak
hi everyone
5:38 pm Spec
clouddig: see comment regarding .m2 here: https://hub.docker.com/_/maven/
5:40 pm robak
I've a php/node/angular/mysql application that I am trying to dockerize - I have all the internal things working fine and connecting to each other via 'links', but when angular (which executes in the browser, so outside of the containers, on the host) tries to access the docker 'links' it obviously fails, because it can't resolve 'web' or 'php'
5:40 pm
is there a way for docker to expose that to the host?
5:40 pm
or am I missing something obvious here?
5:41 pm Spec
you need to expose the port in question with -p
5:41 pm clouddig
Spec: So you mean https://github.com/carlossg/docker-maven/blob/3... see that now...So I don't understand how that VOLUME works. For the docker build process, is that directory actually a volume that gets lost?
5:41 pm robak
Spec: it is not about ports not being exposed, but about 'links' names/aliases being resolve-able only INSIDE containers
5:41 pm
but not on the container's host
5:41 pm Spec
clouddig: as far as i understand it, yes. it's so when you run the image you can pass in your shared container volume (or mount your user's .m2)
5:42 pm
clouddig: so you can just build and then use a shared volume once and then .m2 would be populated, it's weird they don't mention it in the "how to use this image" part
5:42 pm robak
Spec: while the app in container can just connect to 'web' or 'php' or 'mysql', the angular that executes in the browser doesn't know what those are and can't resolve these names
5:42 pm Spec
robak: yeah. angular is not "inside" the docker ecosystem, it's in your browser
5:43 pm
robak: so you need to expose the ports of the webserver it needs to talk to
5:43 pm
robak: so expose the port, and tell angular to hit that port
5:43 pm robak
Spec: again - this is not about ports not being exposed - ports ARE. It is the DNS, NAME RESOLUTION that doesn't work. when Angular wants to connect to web:8080 it fails because it can't resolve what 'web' is
5:44 pm Spec
yes
5:44 pm
stop using "web"
5:44 pm
point it to the IP of the container that has its port exposed
5:44 pm clouddig
Spec: If I add this to my docker run, it does use my local maven repository "-v ~/.m2/:/root/.m2/"
5:44 pm Spec
or add 'web' into your /etc/hosts as 127.0.0.1 if you must
5:44 pm
clouddig: yeap
5:44 pm
clouddig: that's the intention -- to reuse your dependencies
5:44 pm robak
Spec: I don't know what that IP is during container build
5:44 pm clouddig
Is there some way to unregister a VOLUME declaration from a parent image?
5:44 pm Spec
robak: you need to access angular while building?
5:44 pm robak
Spec: and these are in angular app config file
5:45 pm Spec
clouddig: i think that's a feature request that isn't complete, last i checked
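[Since a parent's VOLUME can't be unregistered, the usual workaround for clouddig's case is to cache the dependencies somewhere that isn't declared as a volume, so they survive into the image layers. A sketch, assuming the standard pom.xml layout:]

```dockerfile
FROM maven:3-jdk-8

WORKDIR /usr/src/app
COPY pom.xml .

# /root/.m2 is declared as a VOLUME in the parent image, so files
# written there during 'docker build' are discarded; point Maven's
# local repo at a plain directory instead
RUN mvn -B -Dmaven.repo.local=/opt/m2 dependency:go-offline

COPY src ./src
RUN mvn -B -Dmaven.repo.local=/opt/m2 package
```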
5:45 pm
robak: if you're running it localhost, then use localhost:8080
5:45 pm
robak: if you need service discovery, you need to add some service discovery mechanism
5:45 pm robak
Spec: unfortunately, this won't work, because the same url is used by the app to talk to some different stuff
5:45 pm blablablub
hi everyone.. short question: Our team is using docker as a development environment for compiling robotics software. Now we want to integrate the SocketCAN library into our framework, which is a linux module that integrates CAN with the linux network layer (the interface is then callable by "ifconfig" for example).. any idea how to do that with docker?
5:46 pm robak
so using localhost would mean the second part of the app talks to its own container
5:46 pm blablablub
or could u guys give me some hints on how to implement this?
5:46 pm Spec
robak: what
5:47 pm robak
Spec: long story short, I can't do what you say. Now back to the original question, is there a way to expose links/aliases to the host?
5:47 pm Spec
why can't you do what i say?
5:47 pm
localhost:some_port will only go to one container
5:47 pm robak
Spec: no, it won't.
5:47 pm
imagine single variable
5:47 pm
imagine it being used in 3 containers
5:47 pm
result?
5:47 pm
all of them going to themselves, due to it being localhost
5:48 pm
instead of all of them going to 'web'
5:48 pm Spec
only angular would be set to this...
5:48 pm robak
I can't 'only set it for angular'
5:48 pm
it is one variable used in multiple places
5:48 pm
I didn't write it, I can't change it, I need to deploy it.
5:48 pm Spec
ah
5:49 pm robak
so, I need to automatically expose the docker links to the host running docker
5:51 pm Spec
robak: i think you're gonna want to look into service discovery
5:52 pm ash_workz
docker is not as magical as it was made out to be to me. It doesn't know where a container is on a remote server
5:52 pm
how do I fix that?
5:53 pm programmerq
ash_workz▸ what?
5:53 pm
can you elaborate?
5:53 pm PCatinean
hey guys
5:54 pm ash_workz
programmerq: gladly! :)
5:54 pm PCatinean
I have an image that has a script and a .sql file inside. I also have a Docker compose file that bundles them together
5:54 pm ash_workz
programmerq: I have 2 docker-machines, and I want one to run pg_basebackup
5:54 pm
using the host of the other
5:54 pm programmerq
ash_workz▸ are both containers on the same overlay network?
5:55 pm ash_workz
programmerq: not 100% sure what that means
5:55 pm PCatinean
What happens is that when running the mysql image it sets a volume for /docker-entrypoint-initdb.d, effectively adding the .sql file there, which imports the database
5:55 pm
If I push the image as is on docker.hub, will this still work? Is the build context preserved?
5:55 pm programmerq
ash_workz▸ docker has multi-host networking. you can have containers on multiple hosts all on the same network.
5:55 pm rayha_
QQ anyone got any experience with docker-machine syntax for vmware vsphere?
5:55 pm ash_workz
programmerq: well, they're all on the same physical network; and atm, both docker-machines are just running locally
5:55 pm programmerq
PCatinean▸ build context is the tar that is sent to the docker daemon for the build step. it is discarded once the image is produced.
5:56 pm ash_workz
programmerq: but the nth goal would be to be able to use them remotely (ie: with a generic driver)
5:56 pm programmerq
PCatinean▸ a volume is, by definition, not part of a container's filesystem at all. when you push an image, it will only push the layers for the image
5:56 pm PCatinean
programmerq, then what does docker-compose do when I do this on a fresh machine, pulling the images from the hub?
5:56 pm programmerq
not a container's layer, and definitely not any volumes attached to a running container.
5:56 pm PCatinean
hmmm that does make sense
5:56 pm Besogon
hello there
5:57 pm programmerq
ash_workz▸ both machines running locally can still participate in a multi-host network
5:57 pm ash_workz
k, I'll look it up
5:57 pm PCatinean
Still, how do I preserve this behavior so that when someone runs docker-compose, the .sql file is taken from somewhere and added to the mysql image
5:57 pm programmerq
PCatinean▸ docker-compose will pull the image, and create a new container with a volume (depending on what your docker-compose.yml file looks like)
5:57 pm PCatinean
thus running it and effectively restoring the database
5:57 pm Besogon
it's a silly question.. apache inside docker and my user have different permissions.. How to fix that?
5:58 pm bbigras
Is it possible to use the new secrets management feature in a stack file?
5:58 pm programmerq
ash_workz▸ as of docker 1.13, you can set up a swarm mode cluster using the 'docker swarm init' and 'docker swarm join' commands. then you can create a network like this: docker network create -d overlay --attachable foobar
5:58 pm ash_workz
programmerq: the easiest way to do this is run my machines in swarm mode?
5:58 pm PCatinean
programmerq, https://hastebin.com/pejobonepe.http
5:58 pm ash_workz
programmerq: That isn't advisable though with DB containers, right?
5:58 pm programmerq
ash_workz▸ if you don't want swarm mode, then you can set up an external kv store and configure both docker engines to use that kv store for the cluster discovery stuff.
5:59 pm
ash_workz▸ why not?
6:00 pm ash_workz
programmerq: because doesn't the deployment measures with swarm mode allow data volumes to be mismatched (or something)
6:01 pm programmerq
ash_workz▸ you can still use 'docker run' and be specific about what node you are scheduling a container on.
6:01 pm
the --attachable option for 'docker network create' allows you to attach a regular container to a network in swarm mode.
6:01 pm
so you'd basically be using just the multihost network discovery part.
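[programmerq's recipe, spelled out as a sketch; the IPs, the network name, and the pg_basebackup line (ash_workz's use case, which assumes postgres replication is already configured) are placeholders:]

```shell
# on the first machine: start a swarm
docker swarm init --advertise-addr 192.168.99.100

# on the second machine: paste the join command that init printed
docker swarm join --token <worker-token> 192.168.99.100:2377

# on a manager: create an overlay network that plain containers can join
docker network create -d overlay --attachable foobar

# now ordinary 'docker run' containers on either host share one network
docker run -d --name primary --network foobar postgres
docker run --rm --network foobar postgres pg_basebackup -h primary -D /tmp/backup
```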
6:02 pm ash_workz
programmerq: I guess that's only if you define replicas
6:02 pm
programmerq: I mean, what I was talking about
6:02 pm programmerq
ash_workz▸ there's no such thing as a replica if you use 'docker run'
6:02 pm PCatinean
programmerq, I will just make a simple repository with the script and dump and docker image then
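[One way to do what PCatinean settled on: bake the dump into the image itself, since COPY'd files live in image layers and, unlike volume contents, do survive a push to the hub. The tag and filename are placeholders:]

```dockerfile
FROM mysql:5.7

# files COPY'd into the image are part of its layers and survive a
# push/pull; the mysql entrypoint runs anything in this directory on
# first start, importing the dump into a fresh database
COPY dump.sql /docker-entrypoint-initdb.d/
```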
6:03 pm ash_workz
programmerq: well, I'm using compose; but I don't have to use deploy right? that'll be essentially the same thing, right?
6:03 pm programmerq
a replica is a 'docker service' interface concept
6:03 pm
docker-compose uses 'docker run' under the hood, yes.
6:04 pm
ash_workz▸ honestly, it might be a good idea for you to take a look at universal control plane. it's got both swarm mode and swarm classic built in.
6:04 pm ash_workz
programmerq: I saw that recently, I haven't had time to really dive into it yet
6:45 pm
programmerq: just so you know, the exact same information on https://docs.docker.com/engine/reference/comman... is repeated under examples
6:45 pm
programmerq: is --subnet arbitrary?
6:56 pm programmerq
ash_workz▸ it's optional
6:56 pm ash_workz
programmerq: but is it?
6:58 pm programmerq
...yes
6:58 pm
if you don't specify, docker will choose one for you.
7:01 pm ash_workz
why is swarm giving me an error when I try to join a worker? Error response from daemon: rpc error: code = 14 desc = grpc: the connection is unavailable
7:01 pm programmerq
ash_workz▸ did you pass it an ip address that is routable?
7:02 pm ash_workz
programmerq: I pasted the command rendered by the docker swarm init command
7:02 pm programmerq
okay
7:02 pm
was the ip in that command routable?
7:03 pm ash_workz
I don't know how to discern that o.o;
7:08 pm
apparently the answer was no, and I had to use an ip from another interface
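[A quick way to check the routability question programmerq raised, and the fix ash_workz landed on; the IP and interface name are placeholders:]

```shell
# on the would-be worker: can the manager's swarm port be reached at all?
nc -zv 192.168.99.100 2377

# on the manager: advertise a reachable interface (or IP) instead
docker swarm init --advertise-addr eth1
```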
7:24 pm kireevco
I have a very special usecase, I need to pretend like I have systemd so the service starts natively. Is there a best way to do this?
7:25 pm
(background: I'm using ansible-compose to install influxdb from a deb package, which installs itself and configures systemd service)
7:26 pm programmerq
kireevco▸ just set the CMD to reflect what's in the ExecStart of the systemd unit.
7:26 pm
run your thing directly
7:27 pm kireevco
pocketprotector: that would not be pure, as my ansible task ensures systemd is started
7:27 pm programmerq
kireevco▸ you're running ansible during the 'docker build' phase?
7:28 pm kireevco
programmerq: ansible-container is building my docker image, yes.
7:28 pm programmerq
kireevco▸ does ansible have any sort of "container build mode" where it'll just return on service running checks?
7:28 pm
I ran into a similar issue when I was experimenting with building an image using chef-solo (about 2 years ago)
7:30 pm kireevco
Well, I could, I guess, add a flag "in_container: true" that would make it not use systemd and use the direct command.
7:30 pm
programmerq: but I wanted to avoid it
7:31 pm
ah... but then ... when I run a container, there is nothing installed, so it won't work either.
7:31 pm
hm...
7:32 pm
alright.
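[programmerq's suggestion in Dockerfile form -- a hedged sketch only, since the real paths come from the package's systemd unit (influxdb's ExecStart is typically /usr/bin/influxd with its config file), and the deb filename here is hypothetical:]

```dockerfile
FROM ubuntu:16.04

# install the deb as usual; its postinst may try to talk to systemd,
# which isn't there -- that part can be tolerated at build time
COPY influxdb_amd64.deb /tmp/
RUN dpkg -i /tmp/influxdb_amd64.deb || true

# instead of 'systemctl start influxdb', run the unit's ExecStart directly
CMD ["/usr/bin/influxd", "-config", "/etc/influxdb/influxdb.conf"]
```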
7:36 pm croberts
do i need a physical box to run containers
7:37 pm programmerq
you can run containers on a VM too
7:37 pm croberts
cool
7:38 pm programmerq
containers aren't virtualization-- they're normal processes being run by your kernel, but get some isolation via namespaces and cgroups.
7:39 pm pocketprotector
yeah what he said
7:43 pm iAmerikan
is it standard to run containers within vms even on an ec2 instance or something
7:44 pm programmerq
iAmerikan▸ yes
7:45 pm
iAmerikan▸ if you are running containers, you may not need to run as many VMs though. Many folks find they can run fewer/larger VMs
7:55 pm phutchins
Anyone have a solution for public containers that would contain statically compiled assets with sensitive data in them? I'm considering looking into something like encrypting the statically compiled assets and passing the decryption key to the container via env variable but wondering if there is a better solution...
7:55 pm iAmerikan
what's the benefit using the vms if the instance is solely running containers
7:59 pm programmerq
iAmerikan▸ there's good management tools around provisioning VMs. It depends on what you consider better/easier I guess.
7:59 pm
it's certainly not unheard of for folks to run containers on physical machines
7:59 pm
but it certainly isn't a *requirement*
7:59 pm iAmerikan
sure sure. out of curiosity, could you list a few?
8:00 pm
currently we're running hundreds of containers on ~10 machines
8:00 pm programmerq
it's quite easy to provision a VM in a public cloud, for one.
8:01 pm
or if I have a mixed workload environment, I'll want to be able to relatively easily run windows or linux or freebsd or whatever
8:01 pm
so having VMs in a case like that would be good
8:01 pm iAmerikan
gotcha, that all makes sense
8:01 pm dcarmich
I'm testing Docker on my Mac with macOS Sierra, and I can't ping the address of my first container successfully even when I can ping out from the container itself. What could cause this?
8:01 pm programmerq
but if you definitely know you're on a 100% linux container workload, and doing provisioning of the physical hosts is something that isn't really a big deal, then running on metal is totally valid.
8:02 pm iAmerikan
how are you getting the address? dcarmich
8:02 pm programmerq
dcarmich▸ you won't be able to ping the container from macos
8:02 pm dcarmich
ifconfig within the container.
8:02 pm programmerq
dcarmich▸ if you want to reach a service running in a container, use the port publishing system.
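[Docker for Mac runs containers inside a hidden VM, so container IPs aren't routable from macOS; the port publishing programmerq mentions looks like this:]

```shell
# publish container port 80 on host port 8080
docker run -d -p 8080:80 nginx

# reachable from macOS even though the container's IP itself is not
curl http://localhost:8080
```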
8:02 pm
phutchins▸ that could work. there's also the secrets feature that came out in docker 1.13.
8:03 pm iAmerikan
programmerq: sure. we don't provision that often, and have ok amis (near todo is autoscaling), but i'm always interested in how other people are doing things
8:07 pm phutchins
programmerq: I'm building all of this in Kubernetes clusters so I have secrets handled there...
8:20 pm Bejjan
my god windows is giving me a headache building images
8:21 pm
I've not been this frustrated in years
8:33 pm Omnifarious
So, whenever I read a Docker manual, it tells me how easy it all is, gives me a few magic commands that do amazingly complex things under the hood, doesn't explain how any of the amazingly complex stuff works, and then finishes.
8:33 pm
I don't want to have Docker set up my VM for me.
8:34 pm
I have a perfectly good VM that I can install a docker daemon on with a package manager.
8:34 pm
And I want to use that because I want to be able to peek into exactly what the stupid thing is doing.
8:34 pm
How do I install the docker command on OS X and connect it to a docker daemon running in some arbitrary place?
8:35 pm programmerq
Omnifarious▸ docker is a client/server model. Say I'm running an arbitrary ubuntu VM on vmware or virtualbox
8:35 pm
I'd need to set up TLS on that docker daemon (link coming)
8:35 pm
Omnifarious nods.
8:35 pm
and put the client cert where my docker cli client runs (on my mac)
8:35 pm
you set env vars for the client that tell it where the server is, and what certs to use
8:36 pm
and then it can communicate with the remote daemon.
8:36 pm
(grabbing that link now. more to come after that)
8:36 pm Omnifarious
OK, so it uses ssh-style key authentication. Does it have any ridiculous requirement to have your cert signed by some arbitrary authority?
8:37 pm programmerq
it isn't ssh-style-- ssh uses bare keys. Part of the setup is specifying an authority. 99% of the time, you just use your own authority (you run a few openssl commands to get everything)
8:37 pm
https://docs.docker.com/engine/security/https
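[Once the TLS setup from that guide is done, the client-side wiring programmerq describes is just three environment variables; the IP and cert path below are placeholders:]

```shell
# tell the docker CLI where the daemon is and which certs to present
export DOCKER_HOST=tcp://192.168.56.10:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/my-vm"
```

[After that, `docker version` should report both the local client and the remote server.]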
8:37 pm
Omnifarious sighs deeply.
8:37 pm Omnifarious
Alright.
8:37 pm programmerq
now, since that's a bit of a pain
8:37 pm
there's a project called docker-machine
8:37 pm
its job is to provision docker and set up TLS
8:37 pm
it has a variety of drivers.
8:37 pm
Omnifarious nods.
8:37 pm
many people use the virtualbox driver, which will get you a virtualbox VM using a mini distro with little more than docker itself.
8:38 pm Omnifarious
Most of those drivers seem geared towards creating a whole VM for you.
8:38 pm programmerq
there's also the generic driver
8:38 pm
I use the generic driver for manually provisioned VMs and bare metal
8:38 pm Omnifarious
Can I get a list of drivers from the docker-machine drive?
8:38 pm diegoaguilar
I hve a docker container with redis installed
8:38 pm programmerq
you pass the generic driver ssh credentials and an ip
8:38 pm Omnifarious
Err, command
8:38 pm diegoaguilar
it just keeps halting ...
8:38 pm
the redis instance https://gist.github.com/diegoaguilar/35d8258c8e...
8:39 pm
any clue? dont think its exactly docker related but just in case
8:39 pm programmerq
Omnifarious▸ here's the list of bundled drivers: https://docs.docker.com/machine/drivers/
8:39 pm kraan
Hi all. How is it possible that Docker keeps track of old and removed macvlan networks/interfaces? e.g. interface mv1 and mv2 are still present using "ip link list" while those networks were defined and removed earlier?
8:39 pm Omnifarious
programmerq, yay! There's a step towards documentation written for someone like me.
8:39 pm programmerq
diegoaguilar▸ docker run --sysctl key=value
8:40 pm
the biggest usability issue with using a local virtualbox VM for local development (docker or not) is the virtualbox shared folders performance issues.
8:40 pm
Omnifarious nods.
8:40 pm Omnifarious
I do not like shared folders for that reason, and use them very sparingly.
8:41 pm programmerq
another issue with using a VM for development is that it doesn't match the "linux native" experience of using docker. for example, you can't do 'docker run -d -p 80:80 nginx && curl localhost'
8:41 pm Omnifarious
I actually have a whole dev environment set up on the VM use X11 forwarding.
8:41 pm programmerq
so that's where the docker-for-mac project comes in.
8:41 pm
no virtualboxsf (it uses its own thing that supports filesystem events), it has automatic port forwarding, etc...
8:42 pm
that's the motivation behind docker-for-mac anyway
8:42 pm
it's certainly not a way to learn what's going on under the hood
8:42 pm
Omnifarious nods.
8:42 pm
it's goal is to get out of the way and let you get some work done.
8:42 pm diegoaguilar
programmerq, I don't exactly understand what u suggest I do by `docker run --sysctl key=value`
8:42 pm Omnifarious
I don't mind tools like that. But I want to use them after I know what's really going on.
8:43 pm programmerq
diegoaguilar▸ your error message is complaining about a sysctl value not being correct. you can set those at container create time with the --sysctl flag.
8:43 pm diegoaguilar
oh .. hmm I have no idea what that value should be
8:43 pm programmerq
diegoaguilar▸ it's literally in the error message?
8:45 pm diegoaguilar
duh, right
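[For the record, the redis startup warnings behind this exchange are commonly about net.core.somaxconn and vm.overcommit_memory; only the namespaced one can be set per container with the flag programmerq mentions, the other must be set on the host. A sketch:]

```shell
# set a namespaced sysctl at container create time
docker run -d --name redis --sysctl net.core.somaxconn=511 redis

# non-namespaced sysctls (e.g. vm.overcommit_memory) are host-wide
sysctl -w vm.overcommit_memory=1
```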
8:46 pm ghollies
I have a few machines (v1.13) that have gotten into a state where containerd is continually restarting, leading to rpc connection failures (can't interact with containers via stop/exec/health checks etc). Looking at journalctl I see containerd restarting with a new PID and then dying with a fatal error that it can't find /var/run/docker/libcontainerd/containerd/<container-id>/<random-id>/process.json: no such file or directory Has an
9:09 pm pron
are secrets usable only with swarm ?
9:10 pm programmerq
pron▸ yes.
9:10 pm pron
why ???
9:11 pm programmerq
iirc, it depends on the swarm mode cryptographic node identity.
9:11 pm Omnifarious
Now that I've used docker-machine to set up an existing VM with keys to be controlled by a local instance of the docker client, how do I get that client installed?
9:12 pm programmerq
Omnifarious▸ I think brew has it
9:12 pm pron
programmerq: so what is the proposed solution for non-swarm docker? :)
9:12 pm programmerq
pron▸ same as it always has been. you can put secrets into a volume or an env var
9:12 pm
or combine one of those with an external secret management system
9:12 pm Omnifarious
So I have to get brew running on my OS X box again. I upgraded from 10.8 to 10.12 (I was away for a long time) and brew broke. :-)
9:12 pm pron
env is a bad place for that
9:12 pm programmerq
I think I've seen a vault volume driver
9:12 pm pron
and i don't know what it has always been
9:13 pm
using it for maybe a month now
9:13 pm
and was happy to find secrets
9:13 pm programmerq
pron▸ sounds like a volume is going to be your best bet.
9:13 pm
or, consider using swarm mode
9:13 pm pron
but since swarm is built in i don't see a reason why not give secrets to regular docker
9:13 pm
:P
9:14 pm Nilbus
except that "it depends on the swarm mode cryptographic node identity," which doesn't exist outside of swarm.
9:15 pm pron
well
9:15 pm
it lets me create secrets :P
9:15 pm
but thx for answers
9:17 pm Nilbus
Basically, it's built and kept secure using all the things that swarm provides. Providing secrets outside swarm would have to be a rewrite of secrets, doing something completely different. programmerq suggested some good alternatives though.
9:23 pm pron
is there a way access local image from swarm node?
9:36 pm Nilbus
pron: no, but you can run an image in a swarm service on only one specific node.
9:37 pm pron
yes, it was just giving a confusing error, but it runs well. since i need it for testing, it's good enough
9:38 pm
but is confusing as hell
9:39 pm tejasmanohar
what does network i/o mean in docker? is that how much data has come in alongside how much data has gone out?
9:39 pm
*in docker stats
9:39 pm
or some measure of throughput
9:58 pm Nilbus
I'm struggling to get a node (outside AWS) to join and participate in my swarm (set up using Docker for AWS) ingress network. I have opened ports 7946,4789,4789 tcp/udp incoming via a new security group to my one manager+worker node, as described at https://docs.docker.com/engine/swarm/ingress/. The node joins the swarm but doesn't join the ingress routing network. What should I investigate to see what's
9:58 pm
wrong?
10:15 pm darkl0rd
Hey guys, anyone around?
10:18 pm programmerq
Nilbus▸ "doesn't join the ingress routing network" - how are you making that determination?
10:25 pm Nilbus
Although `docker service ps <service>' and `docker node ls' both report both nodes, `docker network inspect ingress' on the manager reports that there is only 1 peer, itself. Also, the http service includes the hostname (container id) as an HTTP header, which is always the same, the manager node's container id.
10:27 pm Schwarzbaer
Hi. I'm currently using Docker mostly to test things locally on my machine. While trying to set up influxdb and telegraf today, I became a bit overwhelmed, owing in part to the fact that --link is apparently deprecated. How do I start those containers so that telegraf can connect to influxdb *and* can be accessed locally via :8125/UDP?
10:27 pm Nilbus
The service is scaled to 2, one for each node in the swarm, and there is one task running on each node, according to docker service ps.
10:28 pm programmerq
Nilbus▸ the ingress network is a special network. each node should have its own local ingress network.
10:33 pm Nilbus
programmerq: interestingly then, the worker node outside AWS has no ingress network listed by `docker network ls'
10:35 pm programmerq
Nilbus▸ do they have any services running on them?
10:36 pm
are those services connected to an overlay network?
10:36 pm
is there a specific problem you're trying to troubleshoot?
10:38 pm Nilbus
programmerq, yes. The service is running on both nodes, as indicated by docker service ps. When I connect to the service's port, it is always load-balanced to only the manager node, never utilizing the 2nd worker node. That's the problem I'm trying to resolve.
10:39 pm
Is the service connected to the ingress network? I believed that to be the default with --publish 80:80, since I'm not specifying --publish mode=host
10:59 pm FSanches
I'm new to docker and I'm having trouble to understand a few things. Suppose I have a container that clones a specified git repo and runs a testsuite on it, then saves the results to a database and then the container goes down.
10:59 pm
If I have a list of hundreds of git repo URLs, how could I schedule several containers to run at once, each dealing with a different repo from my list?
11:00 pm
I'm not sure what to use for the scheduling of the same image with a customized parameter (like the git repo URLs)
11:01 pm
seems to be an orchestration need, but I still do not understand a lot about the orchestration tools. I'm trying to use Google Container Engine for this, by the way.
11:03 pm Nilbus
FSanches, sounds like you could set up an entrypoint to be a script that takes the git repo URL as an argument (and perhaps other arguments), and pass in that URL argument via docker run. The CMD arguments to docker run get passed as arguments to the entrypoint, when entrypoint is used in the Dockerfile.
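A minimal sketch of the ENTRYPOINT pattern Nilbus describes; the image name, script path, and repo URL are placeholders:

```shell
# In the Dockerfile:
#   COPY run-tests.sh /run-tests.sh
#   ENTRYPOINT ["/run-tests.sh"]
#
# Anything after the image name on the command line is passed as
# arguments to the entrypoint script:
docker run --rm test-runner https://github.com/example/some-repo.git
```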
11:05 pm FSanches
Nilbus: ...but then I would have to manually invoke docker run? Can't I schedule it to dispatch a bunch of containers once per day?
11:07 pm Nilbus
"manually" via a script, you mean?
11:07 pm FSanches
yeah
11:08 pm
I mean... I have a fixed list of repos I'll be monitoring and I need it to be kept up-to-date.
11:08 pm
Perhaps there's a way for git commits on github to signal my dashboard whenever there's new code to be checked, but for now I'd be happy to rerun the testsuite on every repo daily.
11:09 pm Nilbus
docker doesn't have any sort of native cron built in for launching containers on a regular interval, if that's what you mean
11:09 pm FSanches
(i'm implementing a dashboard to monitor the progress of several git projects based on my testsuite running on containers in parallel)
11:10 pm Nilbus
are you trying to distribute the containers across several nodes without having to decide where to run docker run, or what's the real challenge you're trying to resolve?
11:12 pm FSanches
Nilbus: I'm new to this, sorry if my question is too simple/basic. I am unsure how to dispatch on a regular basis several containers which all perform the same kind of task (running a testsuite on a git checkout) but having each container deal with a different repo URL
11:13 pm
It's unclear what orchestration technique would be good for that kind of task
11:13 pm
I want the service to run on its own triggering the full re-run of the testsuite from time to time
11:14 pm
of course I'll have a container for the webapp that reads the check-results from the database and displays it to the users, but that's pretty trivial
11:15 pm Nilbus
I see. You're asking about what might be good for kicking off those jobs: storing the list, looping over the repos, kicking off containers with the right ENV or args.
11:15 pm FSanches
what's unclear to me is what are the best practices to dispatch these background jobs for crunching the data
11:16 pm
Nilbus: Spot on! That's exactly what I'm looking for!
11:19 pm Nilbus
The only thing that comes to my mind is Jenkins. It has a cron plugin. You could create a cron job that runs a bash script: pull a list of repos from somewhere, loop over those, and run a docker container for each. Perhaps you could run each container in a different downstream job, though I'm not sure how.
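The cron-driven script Nilbus suggests might look roughly like this; `repos.txt` and the `test-runner` image name are assumptions:

```shell
#!/bin/sh
# Launch one detached, self-removing container per repo URL.
# Run this from cron (or a Jenkins cron-triggered job) once per day.
while read -r repo; do
  docker run -d --rm test-runner "$repo"
done < repos.txt
```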
11:22 pm dhscholb
i'm having a little trouble with imports/exports. I want to export some images that i had previously imported with "docker import - <image-name>", but i can't figure out what the specific container ID for "image-name" is
11:24 pm
basically i need a way to map the image-name used in the import command with a container ID that i can use in the export command
11:24 pm Nilbus
programmerq: It seems like normal behavior would be for that ingress network to be created whenever a service task is scheduled on a node, right? I'll search around a little to see if anyone else has seen that not happen.
11:28 pm programmerq
Nilbus▸ that is what I expect the behavior to be.
11:29 pm
dhscholb▸ import creates an image from a tarball. export creates a tarball based on a container's filesystem. normally if you want to copy an image from one host to another, you'll want to use save/load (or push/pull)
11:30 pm
dhscholb▸ when you do the 'docker import', you can specify a name for an image. you can then do a 'docker save' using that name.
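The import/save flow programmerq describes, as a sketch (tarball and image names are placeholders):

```shell
# Create an image from a filesystem tarball, giving it a name up front
docker import rootfs.tar myimage:latest

# Export that image by name, preserving image metadata and layers
docker save -o myimage.tar myimage:latest

# On the destination host, restore it
docker load -i myimage.tar
```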
11:34 pm dhscholb
programmerq: "docker save" was indeed what i needed, thank you
11:34 pm Nilbus
programmerq, would there be anywhere to look for error log output as it fails to create that?
11:35 pm programmerq
Nilbus▸ the docker daemon log itself.
11:35 pm
Nilbus▸ without digging in, it does sound like your gossip protocol ports aren't open. you listed some ports above.
11:35 pm
but I don't remember which ones
11:37 pm
Nilbus▸ https://docs.docker.com/engine/swarm/swarm-tuto...
11:37 pm
I couldn't find which ports you said you had open in scrollback.
11:38 pm Nilbus
thanks
11:39 pm programmerq
Nilbus▸ it'd probably be a good idea to start tossing as much info as possible into a gist or something.
11:58 pm FSanches
hey, Nilbus! Thanks for the tips! I was away, watching youtube tutorials :-D
February 14th, 2017

12:14 am chilli0
Hello :)
12:14 am
A quick question, I am using docker-compose to start a container
12:14 am
how can I delete all data from the container so it will restart fresh? Do I need to delete the images? or is there another way
12:16 am daniel331
hi all. I'm having a problem linking docker containers on a non-default network. I ran this command: docker run -it --name uk.org.company.sql --net docker_sqlnet --rm debian bash. then I ran this command: docker run -it --net docker_sqlnet --name yyy --rm debian bash. then expected to be able to ping uk.org.company.sql -- but it just says "Unknown host". any ideas?
12:18 am jp11
chilli0: just delete the container, look at docker-compose down --help
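The "restart fresh" sequence jp11 is pointing at, as a sketch. Note that `-v` also deletes named volumes declared in the compose file, which is where persistent data (e.g. a database) typically lives:

```shell
# Remove the project's containers and networks; -v removes its volumes too
docker-compose down -v

# Recreate everything from scratch
docker-compose up -d
```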
12:18 am chilli0
jp11, thanks for that
12:18 am daniel331
I've also tried with --link uk.org.company.sql:sql, but nothing I do seems to establish the link. (I understand according to the docs that --link is no longer necessary)
12:20 am programmerq
daniel331▸ that should work, although seeing a container name with so many dots in it is not so common. do you get the same behavior if you do dashes instead?
12:23 am daniel331
@programmerq: well it's a bit strange because I'm testing ansible scripts and the same configuration worked just yesterday!
12:23 am
but to answer your question, no, anything that has no dots and doesn't look like an FQDN works
12:23 am
which is problematic because, well you know -- "company policy"
12:24 am programmerq
your company names containers with dots in them?
12:24 am daniel331
reverse FQDN notation matching a DNS where applicable. for "convenience".
12:24 am
(like a package name)
12:26 am
so I'm not really sure what to do about this...
12:26 am
should I file a bug?
12:28 am chilli0
I am trying to set up a database with mysql (mariadb), is there any way to pass a .sql file for it to load up?
12:29 am daniel331
chilli0: usually it's done by mounting a directory containing the sql file to a certain location at runtime. have you checked the container documentation?
12:29 am chilli0
daniel331, Yes I have read it, I found you can pass a configuration file to it, but can't see where I can pass a db file
12:30 am daniel331
programmerq: ah. now here's something interesting. 1.12.16 on fedora manages to link them fine. 1.12.3 on ubuntu 16 has problems. maybe it was a fixed bug...
12:32 am
chilli0: if you're using the official mariadb image you can mount the directory to /docker-entrypoint-initdb.d in the mariadb container. any SQL files in there will be automatically imported. if you're not using the official mariadb container, well, it's often a good idea to do that.
12:32 am
(source: https://hub.docker.com/_/mariadb/ )
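Based on the mariadb image docs linked above, the mount might look like this; the directory name and password are placeholders. Any `*.sql` (or `*.sh`, `*.sql.gz`) file in `/docker-entrypoint-initdb.d` is executed on the container's first start, when the data directory is empty:

```shell
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=example \
  -v "$PWD/initdb:/docker-entrypoint-initdb.d" \
  mariadb
```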
12:33 am chilli0
thanks daniel331 , I was reading the mysql documentation that may be why I didn't find that info
12:33 am daniel331
actually chilli0, it's the same for mysql. (that's how I knew.) #awkward :-P
12:34 am chilli0
daniel331, oh really, I couldn't find it on the doco
12:34 am daniel331
it's easily missed i guess
