Discussions
Generate Report for test cases with status Fail?
Main Post: Hi Guys! I am new to this tool. I am still studying the reports module, but I couldn't create a report that generates a list of test cases executed with status Fail and their related defects. Thanks in advance!
DMARC reports saying SPF alignment fail when it shouldn't & some vendors not recognizing FO tag
Main Post:
I'm experiencing three problems:
- Email providers are sending failed DMARC samples even though the source IP is in our SPF record
- Email providers are ignoring the fo=s tag in our record
- Only one or two email providers are sending forensic reports (probably a good thing right now...)
I've verified with MxToolbox's SPF check tool, using our domain and the source IP address reported as failing, and it sees the IP as valid. The specific IP addresses are included under "include:spf.constantcontact.com", which we added 2+ weeks ago.
We don't exceed the 10-DNS-lookup limit: we have two includes and one "a" mechanism, and one of the two includes pulls in a second DNS record, so that's a total of four DNS lookups.
I've also verified that the DMARC record is correct with the fo tag. Some email providers list it at the top of the XML file (like Comcast), but Google, Yahoo, and at least one other don't, and then the report lists every single email they've received because it failed DKIM.
Reviewing the XML file, the header_from domain matches ours but the envelope_from domain does not (this only shows on one of the reports; most only have the header_from). This Google article states that with the relaxed setting, it should still pass. We don't specify "aspf=r", but from what I've read this is the default, and it's confirmed in the reports as "<aspf>r</aspf>".
Lastly, only one email provider is sending forensic reports. Probably a good thing, so Google isn't sending everything...
Our DMARC record:
v=DMARC1;p=none;rua=mailto:[email protected];ruf=mailto:[email protected];fo=s
Here are two example records from a Google report:
<policy_published>
  <domain>abc.com</domain>
  <adkim>r</adkim>
  <aspf>r</aspf>
  <p>none</p>
  <sp>none</sp>
  <pct>100</pct>
</policy_published>
<record>
  <row>
    <source_ip>208.75.123.194</source_ip>
    <count>1</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>fail</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>abc.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>auth.ccsend.com</domain>
      <result>pass</result>
      <selector>1000473432</selector>
    </dkim>
    <spf>
      <domain>in.constantcontact.com</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>
<record>
  <row>
    <source_ip>1.2.3.4</source_ip>
    <count>1</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>fail</dkim>
      <spf>pass</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>abc.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>abc.onmicrosoft.com</domain>
      <result>pass</result>
      <selector>abc-onmicrosoft-com</selector>
    </dkim>
    <spf>
      <domain>abc.com</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>
The first record: as mentioned, the IP 208.75.123.194 is in the "include:spf.constantcontact.com" part of our SPF record. Is the domain listed under the SPF tag the envelope-from domain? That would match the forensic email I was sent by another mail provider, but it should still pass given the relaxed setting.
The second record is in the report because it failed DKIM, but our DMARC record has "fo=s". Some providers are reading this correctly, such as Comcast:
<policy_published>
  <domain>abc.com</domain>
  <adkim>r</adkim>
  <aspf>r</aspf>
  <p>none</p>
  <sp>none</sp>
  <pct>100</pct>
  <fo>s</fo>
</policy_published>
What am I missing here?
Top Comment:
I've created this tool https://DMARCtester.com to visualize the validation process and help you understand the (alignment) issue. DMARC will fail without alignment: make sure at least one of SPF or DKIM passes validation and is aligned. Preferably both.
More information can be found here: https://www.uriports.com/blog/introduction-to-spf-dkim-and-dmarc/
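The alignment issue the comment describes can be illustrated with a small sketch. Relaxed alignment does not mean "any passing SPF domain counts": the SPF-checked domain and the header From domain must still share an organizational domain. In the Python sketch below, `spf_aligned` and the last-two-labels `org_domain` heuristic are illustrative helpers, not a real implementation (real code consults the Public Suffix List):

```python
def org_domain(domain):
    """Naive organizational-domain heuristic: keep the last two labels.
    Real implementations consult the Public Suffix List."""
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def spf_aligned(header_from, spf_domain, mode="r"):
    """Relaxed mode ('r'): organizational domains must match.
    Strict mode ('s'): the domains must match exactly."""
    if mode == "s":
        return header_from.lower() == spf_domain.lower()
    return org_domain(header_from) == org_domain(spf_domain)

# The first record in the report: SPF itself passed, but it passed for
# Constant Contact's bounce domain, which does not share an organizational
# domain with the From: domain even in relaxed mode, so DMARC's SPF
# result is still "fail" despite aspf=r.
print(spf_aligned("abc.com", "in.constantcontact.com"))  # False
print(spf_aligned("abc.com", "mail.abc.com"))            # True
```

This is why adding "include:spf.constantcontact.com" makes SPF pass but does not make DMARC's SPF leg pass; the message then relies on an aligned DKIM signature instead.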
r/u_GLLYResearch on Reddit: Groundless Bearish Reports on Veru Fail to Hinder Sabizabulin’s Fast-Approaching EUA
Reports stuck in Pending/Fail after upgrade of PAS vault/PVWA/CPM/PSM from 9.7 to 9.10
Main Post:
- We have multiple PVWAs.
- Per the vendor, we renamed the "D:\inetpub\wwwroot\PasswordVault\Bin" folder and performed a repair install. Reports are still not working.
- Per the vendor, we verified the presence of the icudt42l.dat file in "D:\inetpub\wwwroot\PasswordVault\Bin" and "C:\Windows\SysWOW64".
- Vault trace logs show that reports are being created. We see reports in the PVWAReports safe, but they have a size of 0 bytes.
Has anyone experienced this scenario?
Has anyone experienced the "Reports pending" issue and perhaps had another solution not mentioned above?
Thanks!
Top Comment:
Check the CyberArk Scheduled Task Service on the PVWAs, that's what runs the reports. My money says it isn't running.
DMARC reports showing SPF fail
Main Post:
I get reports that show SPF fail with a domain similar to NAM03-CO1-obe.outbound.protection.outlook.com or nam01-bn3-obe.outbound.protection.outlook.com. I use the standard TXT record "v=spf1 include:spf.protection.outlook.com -all". Is there something I can do about this?
Top Comment:
It's not because the IP ranges changed; the IPs of those FQDNs are in Microsoft's SPF records. The reason it's failing is forwarding: someone has an auto-forward set up on their mailbox. Forwarding breaks SPF alignment, since the mail appears to come from the Microsoft domain instead of your domain, and therefore causes DMARC's SPF check to fail.
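The forwarding failure mode can be sketched: the forwarder re-sends the message with its own envelope sender, so the SPF-checked domain changes while the header From domain does not. A rough illustration in Python (`dmarc_spf_result` and the last-two-labels `org_domain` heuristic are hypothetical helpers; real implementations use the Public Suffix List):

```python
def org_domain(d):
    """Naive organizational-domain heuristic (last two labels).
    Real implementations consult the Public Suffix List."""
    return ".".join(d.lower().split(".")[-2:])

def dmarc_spf_result(header_from, envelope_from, spf_pass):
    """DMARC's SPF leg: plain SPF must pass AND the envelope (MailFrom)
    domain must align, relaxed, with the header From domain."""
    return spf_pass and org_domain(envelope_from) == org_domain(header_from)

# Direct send: envelope and header domains match, so SPF stays aligned.
print(dmarc_spf_result("yourdomain.com", "yourdomain.com", True))  # True

# Forwarded mail: the forwarder's bounce domain becomes the envelope
# sender; SPF itself may pass for outlook.com, but alignment breaks.
print(dmarc_spf_result(
    "yourdomain.com",
    "NAM03-CO1-obe.outbound.protection.outlook.com",
    True))  # False
```

DKIM signs the message headers and body, so it usually survives simple forwarding, which is why a valid, aligned DKIM signature keeps DMARC passing even when the SPF leg fails this way.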
Reports fail to load on first attempt, but then load on second
Main Post:
I’m sorry for any gaps in information here, as I’m typing this while getting ready for a family gathering. I am having an issue where, when I try to load a report (whether in the console or a web browser), it spins for a bit but never displays any parameter fields, then errors out stating it couldn’t connect to the data source, followed by another error about not loading parameters. I close it out, immediately relaunch, and it works perfectly! Reports will then work for a bit until I let it sit for a while, at which point I have to repeat the process.
SQL is on another server, which also hosts the reporting services point. This seems to have started after upgrading SQL from 2012 to 2016, which required me to remove and reinstall the reporting services point. I used a script to export custom reports and then import them back in, but this occurs for both built-in and custom reports, so I’m pretty certain it is unrelated to that.
The only thing I’ve seen in event logs on the SQL server around the time of attempted reporting was a potential DOS attack from the IP of the Primary, which obviously seems odd. I have manipulated a config file (rsreporting.config or something like that) to allow many more connections than what was the default. I’ve verified that the account used in SCCM for reporting services connectivity has the correct permissions on the CM DB (you’ll just have to trust me on that lol).
Anyone seen this? It’s driving me bananas! Again, sorry for anything I’m missing. On mobile and trying to get Cat in the Hat going on the TV for my kids haha.
Top Comment:
This sounds like a performance issue. Generally, I have only seen this issue when SSRS is remote from SQL and the network is busy.
When you upgraded SQL, did you perform a SCCM Site reset?
Have you checked your db compatibility level? https://www.enhansoft.com/does-sql-server-database-compatibility-level-matter/
Fail playbook run if any task reports "changed"
Main Post:
For an idempotence check I'd like to run ansible-playbook -C myplaybook.yml (maybe in combination with ANSIBLE_DISPLAY_OK_HOSTS=no) and have this command fail with a nonzero error code if any task in any play on any host reports "changed" (or "error", but Ansible handles this already).
Did I miss something in the documentation, like a --fail-when-changed flag, or is this really not possible without grepping the whole play-recap output for something like changed=0?
Molecule (which has a built-in check for this) just seems to grep the whole output for "changed" here: https://github.com/ansible-community/molecule/blob/7d05debca6b7708a3e2f28d8aaf47ce1bd2a831f/molecule/command/idempotence.py
How would you implement this as a simple command on the shell without abusing molecule into running playbooks against existing hosts instead of roles?
Edit:
Found this issue that's more related to ansible-test: https://github.com/ansible/ansible/issues/60226 I'd rather have this in ansible-playbook than ansible-test unless it is somehow possible to test(?) playbooks with ansible-test too?
Edit2:
test-kitchen also relies on grep for this functionality: https://github.com/neillturner/kitchen-ansible/blob/fa49caef12a85fb60f78cb266c7641cad2a64fe3/lib/kitchen/provisioner/ansible_playbook.rb#L436-L439
Edit3:
Using grep also has its downsides if you're interested in displaying which tasks actually are the problematic ones instead of just a PASS/FAIL result. The bash -c "env ANSIBLE_DISPLAY_OK_HOSTS=no ansible-playbook -C -D myplaybook.yml | grep -qE 'changed=[1-9].*failed=|changed=.*failed=[1-9]'" line from test-kitchen works well for the PASS/FAIL case, unfortunately not so much for the "These tasks are the changed ones + here are their diffs - fix this!" case I want to see. :-(
Edit4: I ended up using hooks in the following way:
- id: foo-rollout
  name: All Foo hosts are in the state this repository claims they are
  entry: bash -c "! ANSIBLE_DISPLAY_OK_HOSTS=no ANSIBLE_DISPLAY_SKIPPED_HOSTS=no ansible-playbook -C -D foo.yml | rg --passthrough '(changed:)|(failed:)|(fatal:)'"
  language: python
  additional_dependencies:
    - ansible==2.10.0
  always_run: true
  pass_filenames: false
  stages: [manual]
The thing to note is the ! right at the start of the bash line, which inverts the exit code (so once ripgrep finds something, it is an error instead of a success, and vice versa). The stages: [manual] part is because playbooks are typically too slow to run in fractions of a second, so this is not really helpful as an automatic pre-commit hook. It is nice to run every once in a while via pre-commit run --all-files --hook-stage manual, though.
Top Comment:
I wonder if this is something that could be done with a custom callback plugin.
Or possibly you only need a callback plugin that presents the output in a better format, so you can see exactly which tasks changed.
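Short of writing a custom callback plugin, Ansible's built-in json stdout callback (ANSIBLE_STDOUT_CALLBACK=json) already emits machine-readable results that can be checked instead of grepping the human-readable recap, and it identifies the offending tasks, not just a PASS/FAIL. A sketch under that assumption; the sample document below is abbreviated to the fields the function uses, and the play/task names are made up:

```python
import json

def changed_tasks(report):
    """Yield (play, task, host) triples for every changed task result in
    the document produced by Ansible's built-in 'json' stdout callback:
      ANSIBLE_STDOUT_CALLBACK=json ansible-playbook -C myplaybook.yml"""
    for play in report.get("plays", []):
        play_name = play.get("play", {}).get("name", "")
        for task in play.get("tasks", []):
            task_name = task.get("task", {}).get("name", "")
            for host, result in task.get("hosts", {}).items():
                if result.get("changed"):
                    yield play_name, task_name, host

# Abbreviated shape of the json-callback output:
sample = json.loads("""
{"plays": [{"play": {"name": "webservers"},
            "tasks": [{"task": {"name": "deploy config"},
                       "hosts": {"web1": {"changed": true},
                                 "web2": {"changed": false}}}]}],
 "stats": {"web1": {"changed": 1}, "web2": {"changed": 0}}}
""")

offenders = list(changed_tasks(sample))
for play, task, host in offenders:
    print(f"CHANGED: play={play!r} task={task!r} host={host}")
# A wrapper script would then exit nonzero on any offender:
#   sys.exit(1 if offenders else 0)
```

Piping the real playbook output through such a script gives a nonzero exit code for the idempotence check plus the "these tasks are the changed ones" list the original poster wanted, without inverted greps.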