Getting Selenium Grid going with standalone server 3.13 was kind of a pain. My Chrome remote WebDriver calls would error with: WebDriverException: Message: unknown error: DevToolsActivePort file doesn't exist
The solution of adding --headless and --no-sandbox to the arguments passed to ChromeDriver is easily found with a basic Google search, but implementing it in Python wasn’t exactly straightforward. From my reading of the docs, it seemed that adding an array keyed as ‘args’ to the capabilities dictionary would pass those arguments on to the subsequent call to chromedriver. Testing showed that was wrong.
Fortunately, the ChromeOptions object has a to_capabilities() method that converts it into a DesiredCapabilities dictionary that can be passed to the webdriver.Remote instantiation. See below.
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
self.browser = webdriver.Remote(command_executor="http://seleniumhub.domain.fqdn:4444/wd/hub", desired_capabilities=chrome_options.to_capabilities())
I needed to export individual keys from the keyrings used by our WebKDC and main WebLogin servers to ensure that our dev pool and production pool had a common key to pivot around. This kept the previously used keys valid while also providing the shared key for cutting over to our next-gen servers. Digging into the keyring.c library, I noticed that most of the framework was already in place and that only a couple of accessory functions needed to be written for the wa_keyring binary. The patch for this is below.
From d824d427eff88c318877d64e0dfe3b2a56e4191f Mon Sep 17 00:00:00 2001
From: Greg Kuchyt <gkuchyt@uvm.edu>
Date: Tue, 7 Jun 2016 08:44:37 -0400
Subject: [PATCH] Add export/import functionality of individual keys in wa_keyring

---
 tools/wa_keyring.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 68 insertions(+), 0 deletions(-)

diff --git a/tools/wa_keyring.c b/tools/wa_keyring.c
index e78f7fb..44ad3b0 100644
--- a/tools/wa_keyring.c
+++ b/tools/wa_keyring.c
@@ -32,6 +32,7 @@ Usage: %s [-hv] -f <keyring> list\n\
 \n\
 Functions:\n\
   add <valid-after>                # add a new random key\n\
+  export <id>                      # export key by id\n\
   gc <oldest-valid-after-to-keep>  # garbage collect old keys\n\
   list                             # list keys\n\
   remove <id>                      # remove key by id\n\
@@ -273,6 +274,60 @@ list_keyring(struct webauth_context *ctx, const char *keyring, bool verbose)
     }
 }
 
+/*
+ * Export the key at the given slot such that it can be imported into a
+ * keyring on a different server, thus allowing keys to be synchronized
+ * across servers that have divergent key histories.
+ */
+static void
+export_key(struct webauth_context *ctx, const char *keyring, unsigned long n)
+{
+    struct webauth_keyring *ring;
+    struct webauth_keyring *newring;
+    const char *newkeyring = "export_wakeyring";
+    struct webauth_keyring_entry *entry;
+    int s;
+
+    s = webauth_keyring_read(ctx, keyring, &ring);
+    if (s != WA_ERR_NONE)
+        die_webauth(ctx, s, "cannot read keyring %s", keyring);
+
+    entry = &APR_ARRAY_IDX(ring->entries, n, struct webauth_keyring_entry);
+    newring = webauth_keyring_new(ctx, 1);
+    webauth_keyring_add(ctx, newring, entry->creation, entry->valid_after, entry->key);
+    s = webauth_keyring_write(ctx, newring, newkeyring);
+    if (s != WA_ERR_NONE)
+        die_webauth(ctx, s, "cannot write new keyring %s", newkeyring);
+}
+
+/*
+ * Import one keyring into another keyring. This allows keys from another
+ * server to be synchronized with servers that have a divergent key history.
+ */
+static void
+import_key(struct webauth_context *ctx, const char *keyring, const char *impkeyring)
+{
+    struct webauth_keyring *ring;
+    struct webauth_keyring *impring;
+    struct webauth_keyring_entry *impentry;
+    int s;
+    size_t i;
+
+    s = webauth_keyring_read(ctx, keyring, &ring);
+    if (s != WA_ERR_NONE)
+        die_webauth(ctx, s, "cannot read keyring %s", keyring);
+
+    s = webauth_keyring_read(ctx, impkeyring, &impring);
+    if (s != WA_ERR_NONE)
+        die_webauth(ctx, s, "cannot read keyring %s", impkeyring);
+
+    for (i = 0; i < (size_t) impring->entries->nelts; i++) {
+        impentry = &APR_ARRAY_IDX(impring->entries, i, struct webauth_keyring_entry);
+        webauth_keyring_add(ctx, ring, impentry->creation, impentry->valid_after, impentry->key);
+    }
+    s = webauth_keyring_write(ctx, ring, keyring);
+    if (s != WA_ERR_NONE)
+        die_webauth(ctx, s, "cannot write new keyring %s", keyring);
+}
+
 /*
  * Add a new key to a keyring. Takes the path to the keyring and the offset
@@ -418,6 +473,19 @@ main(int argc, char **argv)
         if (argc > 0)
             usage(1);
         list_keyring(ctx, keyring, verbose);
+    } else if (strcmp(command, "export") == 0) {
+        if (argc != 1)
+            usage(1);
+        errno = 0;
+        id = strtoul(argv[0], &end, 10);
+        if (errno != 0 || *end != '\0')
+            die("invalid key id: %s", argv[0]);
+        export_key(ctx, keyring, id);
+    } else if (strcmp(command, "import") == 0) {
+        if (argc != 1)
+            usage(1);
+        errno = 0;
+        import_key(ctx, keyring, argv[0]);
     } else if (strcmp(command, "add") == 0) {
         if (argc != 1)
             usage(1);
--
1.7.1
We monitor various PeopleSoft environments, including the ability to log in to them, with Webinject and Nagios. Recently, after some updates on the PS side, our login tests were no longer working. When we performed our POST request with our login credentials, we were served a 200 response containing the sign-in page again. This hinted at something missing in the request.
After some digging, I identified that the PeopleSoft sign-in page executes some JavaScript on form submission that sets a cookie in the browser before the POST request is built. This cookie is called PS_DEVICEFEATURES, and it enumerates the capabilities of the client’s browser (e.g. width, height, pixel density, geolocation, canvas, etc.). Basically, it performs a bunch of things Modernizr could do, but puts the results in a cookie.
Since Webinject is not a browser, it doesn’t actually process the response and can’t handle form events; ipso facto, this cookie isn’t being set. Moreover, Webinject doesn’t provide a way to set a new cookie that isn’t provided in a response’s Set-Cookie header.
So I hacked up a patch to Webinject that supports this:
diff --git a/lib/Webinject.pm b/lib/Webinject.pm
index ddb43af..6dc801a 100644
--- a/lib/Webinject.pm
+++ b/lib/Webinject.pm
@@ -904,7 +904,18 @@ sub _http_defaults {
     my $request   = shift;
     my $useragent = shift;
     my $case      = shift;
-
+
+    if($case->{'addcookie'}) {
+        my $cookie_jar = $useragent->cookie_jar();
+        # add cookies to the cookie jar
+        # can add multiple cookies with a pipe delimiter
+        for my $addcookie (split /\|/mx, $case->{'addcookie'}) {
+            my ($ck_version, $ck_key, $ck_val, $ck_path, $ck_domain, $ck_port, $ck_path_spec, $ck_secure, $ck_maxage, $ck_discard) = split(/,/, $addcookie);
+            $cookie_jar->set_cookie( $ck_version, $ck_key, $ck_val, $ck_path, $ck_domain, $ck_port, $ck_path_spec, $ck_secure, $ck_maxage, $ck_discard);
+        }
+        $cookie_jar->save();
+        $cookie_jar->add_cookie_header($request);
+    }
     # add an additional HTTP Header if specified
     if($case->{'addheader'}) {
         # can add multiple headers with a pipe delimiter
This enables a new test case attribute named addcookie, which takes a comma-delimited list of options configuring the cookie you wish to add. The patch is pretty bare-bones: the fields in the addcookie list are the exact arguments passed on to the underlying HTTP::Cookies::set_cookie method, with no type or error checking.
Example: addcookie="0,PS_DEVICEFEATURES,width:1680,/,host.domain.tld,0,1,1,86400,0"
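For ad-hoc testing outside Webinject, the same pre-seeding idea can be sketched with Python’s standard library. This is only an illustration: the domain and the PS_DEVICEFEATURES value below are made up, not the exact string PeopleSoft’s JavaScript computes.

```python
from http.cookiejar import Cookie, CookieJar

# Build the cookie that the sign-in page's JavaScript would normally set.
# A non-browser client never runs that form-submission handler, so we
# create the cookie ourselves before issuing the login POST.
cookie = Cookie(
    version=0,
    name="PS_DEVICEFEATURES",
    value="width:1680 height:1050 pixelratio:1 touch:0 geolocation:1 canvas:1",
    port=None, port_specified=False,
    domain="host.domain.tld", domain_specified=True, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=True, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
)

jar = CookieJar()
jar.set_cookie(cookie)
# The jar can now back an urllib.request.HTTPCookieProcessor (or a requests
# session), so the login POST carries PS_DEVICEFEATURES like a browser would.
```

The verbose Cookie constructor is essentially the Python analogue of the argument list HTTP::Cookies::set_cookie takes in the Perl patch above.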
Here’s the commit in the Webinject GitHub project.
We’re just starting to get our feet wet with Puppet and we found the documentation woefully underwhelming. Here are our notes for getting a basic master/agent setup going with a “Hello world” manifest.
Basic assumptions: SELinux is disabled.

On the master:

Install the packages:

yum install puppet puppetserver

Run the master in the foreground to generate the CA and certificates:

sudo puppet master --verbose --no-daemonize

Once "Notice: Starting Puppet master version ..." appears, CTRL+C.

In /etc/puppet/puppet.conf, set:

dns_alt_names = host.fqdn.tld,alt_name
environment_timeout = unlimited
environmentpath = $confdir/environments

Create /etc/puppet/environments/production/manifests/site.pp with the following:

notify {"Hello world!":}

Start the server:

service puppetserver start

On the agent, edit /etc/puppet/puppet.conf with the following:

server = hostname.fqdn.tld
environment = production

Run puppet agent --test, or start the puppet agent via service puppet start.

Back on the master, sign the agent’s certificate:

puppet cert sign agenthostname.fqdn.tld

On the agent, run puppet agent --test again
and look for the Hello world notice.

The WebAuth WebKDC is able to get tickets in any realm with which it has a trust, but it is currently only able to verify credentials in one realm. In a two-way realm trust this works great: Realm A trusts tickets from Realm B, so if a principal in Realm B authenticates to Realm A’s WebKDC, the WebKDC in Realm A can get a TGT for Realm B. This breaks down when the trust is only one-way (i.e. Realm B trusts Realm A but Realm A does not trust Realm B).
At UVM we have a default realm for UVM and a new realm for former students. This separate realm lets us move users who should no longer be granted access to UVM realm resources out of the UVM realm. There are, however, certain web application services (WASes) that we want both UVM and FORMERSTUDENT realm principals to be able to access. In this case, FORMERSTUDENT trusts UVM but UVM doesn’t trust FORMERSTUDENT; thus we have a WebKDC problem.
Since the WebKDC (in UVM) can currently only get tickets in its default realm, we wanted instead to be able to look at the WebKDC keytab and see whether verification would be possible with any of the principals contained within it. This is valid in Kerberos, just not in how WebAuth behaves; thus a rewrite of the credential verification process in WebAuth is required.
WebAuth opens the keytab specified in WebKDCKeytab and retrieves either the first principal in the keytab or the principal specified by the optional parameter to the WebKDCKeytab directive. So any patch will need to preserve the behavior of taking an optional specific principal and attempting verification with it. What we want is for the verification process to iterate through the keytab and see if there is a principal that matches the realm of the user principal. If there is, we attempt verification, repeating as necessary until we’ve stepped through the whole keytab.
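The selection logic described above can be sketched in plain Python (the real implementation is C against the Kerberos API; the principal strings below are hypothetical examples):

```python
def candidate_principals(keytab_principals, user_principal):
    """Return the keytab principals whose realm matches the user's realm.

    keytab_principals: strings like "service/host@REALM", in keytab order.
    user_principal:    a string like "user@REALM".
    """
    user_realm = user_principal.rsplit("@", 1)[1]
    return [p for p in keytab_principals
            if p.rsplit("@", 1)[1] == user_realm]

# Verification would then be attempted with each candidate in turn,
# stopping at the first success (or failing once the list is exhausted).
keytab = ["webauth/login.uvm.edu@UVM.EDU",
          "webauth/login.uvm.edu@FORMERSTUDENT.UVM.EDU"]
candidates = candidate_principals(keytab, "jdoe@FORMERSTUDENT.UVM.EDU")
# candidates == ["webauth/login.uvm.edu@FORMERSTUDENT.UVM.EDU"]
```

Preserving keytab order also preserves the old behavior as a special case: a keytab whose entries are all in the default realm yields the same principal the current code would pick first.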
You can view the patch here for the one-way realm cred verification described above.
Now that we are a participating SP/IdP for eduroam we wanted to monitor the two top-level radius servers in use on their side. Their wiki suggests using the check_radius.pl plugin available on the Nagios plugin directory. I found it to be a little too limiting and not well-tailored for monitoring so I went ahead and made some modifications. Below is the source for it along with a summary of the changes I’ve made.
Example output: OK: (w:3;c:5;t:10) tlrs1.eduroam.us (0.056135 sec): OK; tlrs2.eduroam.us (0.08103 sec): OK
A simple yet salient detail about fanout exchanges…
Perhaps it was just my inattentive reading, but I was under the impression that a fanout exchange itself would transmit a received message to every host connected to the exchange. Au contraire: a fanout exchange transmits a received message to every queue bound to the exchange. If multiple consumers share a single queue bound to the exchange, the messages in that queue are delivered to them in round-robin fashion, which will be confusing if you’re expecting “broadcast” behavior.
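The distinction can be illustrated with a toy in-memory model; this is just the delivery semantics, not the AMQP client API:

```python
import itertools

class FanoutExchange:
    """Toy model: a fanout exchange copies each message into every bound queue."""
    def __init__(self):
        self.queues = []

    def bind(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:          # one copy per bound queue
            q.append(message)

# Broadcast: each consumer binds its OWN queue, so each sees every message.
exchange = FanoutExchange()
q1, q2 = [], []
exchange.bind(q1)
exchange.bind(q2)
for m in ["m1", "m2"]:
    exchange.publish(m)
# q1 == ["m1", "m2"] and q2 == ["m1", "m2"]

# Round-robin: consumers SHARING one queue split its messages between them.
exchange2 = FanoutExchange()
shared = []
exchange2.bind(shared)
for m in ["m1", "m2", "m3", "m4"]:
    exchange2.publish(m)
consumers = {"a": [], "b": []}
for msg, name in zip(shared, itertools.cycle("ab")):
    consumers[name].append(msg)        # the broker alternates deliveries
# consumers == {"a": ["m1", "m3"], "b": ["m2", "m4"]}
```

In real RabbitMQ terms, getting broadcast behavior means each consumer declares its own (often exclusive, auto-delete) queue and binds it to the fanout exchange, rather than several consumers subscribing to one shared queue.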