Almost like that xkcd joke…
Plex was how LastPass got hacked. https://www.howtogeek.com/147554/lastpass-data-breach-shows-why-plex-updates-are-important/
You still need to patch and maintain things, even if it is Plex.
To piggyback off your post, does anyone have one for Australia?
I don’t think that works on my Samsung TV or my partner’s iPad though. :)
Although it’s not especially effective on the YouTube front, it actually increases network security just by blocking API access to ad networks on those kinds of IoT and walled-garden devices. Ironically, my partner loves it not for YouTube but apparently for all her Chinese drama streaming websites. So when we travel and she’s subjected to those ads, she’s much more frustrated than when she’s at home lol.
So the little joke, while not strictly true, is pretty much true if you just say ‘streaming content provider’.
Hey, so it seems like either you don’t really get licensing, or ‘too expensive’ is just business speak for wanting it done for free.
Exchange Online Plan 1 licences are already very small licences, but you can get even cheaper: Exchange Online Kiosk. Kiosk isn’t designed for users; it’s designed for things like an MFP. You’re then allowed to relay through Exchange Online with an authenticated STARTTLS account set up on the MFP.
However, if you don’t use an authenticated account, you can still send internally. That way your inevitably compromised device doesn’t spam the world with mail or get throttled by Microsoft’s servers, but you can still scan to your own internal staff. And by internal staff (I’m guessing more and more here) I’m betting you have two mail domains: only domains that have been added in your Exchange Online admin centre count as ‘internal’.
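For the authenticated path, the MFP (or any script standing in for it) just does a STARTTLS submission with a login. Here’s a minimal sketch in Python; the account, password, and addresses are placeholders, smtp.office365.com:587 is the usual Exchange Online submission endpoint, and SMTP AUTH has to be allowed for that mailbox in your tenant:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical Kiosk-licensed service account set up on the MFP.
SMTP_HOST = "smtp.office365.com"
SMTP_PORT = 587  # STARTTLS submission port
USER = "scanner@yourdomain.example"       # placeholder
PASSWORD = "app-password-here"            # placeholder

msg = EmailMessage()
msg["From"] = USER
msg["To"] = "someone@yourdomain.example"  # placeholder recipient
msg["Subject"] = "Scan from MFP"
msg.set_content("Test of authenticated relay through Exchange Online.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=30) as smtp:
    smtp.starttls()              # upgrade the connection before authenticating
    smtp.login(USER, PASSWORD)   # SMTP AUTH must be allowed for this mailbox
    smtp.send_message(msg)
```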
If you wanted hybrid, you should do it using the Hybrid Configuration Wizard; it will connect your on-premises Exchange to Exchange Online using mail transports. You need to fix up a bunch of things to get that connected, but doing so will count the on-premises mailboxes as ‘internal’, and unauthenticated mail will be allowed to relay to them.
But 40 Exchange Online-only accounts on Exchange Plan 1 cost hardly anything next to wage time per month.
I’m guessing a lot here, but you said you currently have two different mail servers, online and on-premises. I can only assume you’ve got two different mail domains; otherwise MX routing would be dead for one or the other. And I guess that because you said you’re getting errors that only happen when you send mail to external users.
So…
How are they placing this data? API? Is it not possible to align disk tiers to API requests per minute? Say, API responses limited to one every 1 ms for some clients and one every 0.1 ms for others?
You’re pretty forthcoming about the problems, so I genuinely hope you get some talking points, since this issue affects app and DB design, sales, and maintenance teams at a minimum. Considering all those aspects will give you a better chance of the business realising there’s a problem that affects customer experience.
I think, from handling tickets, maybe add a process to auto-respond to rate-limited/throttled customers with: ‘Your instance has been rate limited because it reached the {tier limit} of your performance tier. This limit applies until {rate limit block time expiry}. Support tickets related to performance or limits will be limited to P3 until this rate limit expires.’
Work with your sales and contracts teams to update the SLA to exclude rate-limited customers from the priority SLA.
I guess I’m still on the “maybe there’s more you can do to get your feet out of the fire for customers’ self-inflicted injuries” angle, like classifying customer issues correctly. It’s bad when one customer can misclassify an issue and harm another customer by jumping the queue and delaying responses to real issues, when everything is working as intended.
If a customer was warned and did it anyway, it can’t be a top-priority issue, which is your argument, I guess. Customers who need more but pay for less, and then expect more than they get: it’s really not your fault or your problem. But if it’s affecting you, I’m wondering how to make it affect you less.
If it’s possible to do, and it causes a user-experience issue, especially one as jarring as “stop accepting writes”, you should start adding rate limits and validating inputs, with the limits shown to the user before they hit the error rate.
To me, you should already be sanitising input anyway, and this would just be part of that logic. If a user is trying to upload more than X, it warns (with a link to documentation of the limit). If the user has gone past the rate limit, then error.
I’m not an SRE or a dev, just a sysadmin. But users expect guard rails: if it’s possible, it’s permitted.
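To make the warn-then-error idea concrete, here’s roughly the logic I mean, as a Python sketch. The tier numbers and docs URL are made up; the point is that the user sees the limit and a link before they ever hit the hard stop:

```python
# Hypothetical numbers for one performance tier.
TIER_SOFT_LIMIT = 800    # warn from here (e.g. requests per minute)
TIER_HARD_LIMIT = 1000   # reject from here
DOCS_URL = "https://docs.example.com/limits"  # placeholder

def check_rate(current_usage: int):
    """Return (accepted, message): warn near the limit, reject past it."""
    if current_usage >= TIER_HARD_LIMIT:
        return False, (f"Rate limit reached for your performance tier "
                       f"({TIER_HARD_LIMIT}/min). See {DOCS_URL}.")
    if current_usage >= TIER_SOFT_LIMIT:
        return True, (f"Warning: {current_usage}/{TIER_HARD_LIMIT} of your "
                      f"tier limit used. See {DOCS_URL}.")
    return True, None

print(check_rate(850))   # accepted, with a warning the user can act on
print(check_rate(1200))  # rejected, with the limit and a docs link
```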
There have been a few cases where ports are blocked. For example, on many residential connections port 25 is blocked; if you pay for a static IP, this often gets unblocked. Same with port 10443 on a few residential services. There are probably more, but these are the ones I’ve seen.
These blocks are trivial to bypass, but the bypass often also fixes the reason the port was blocked in the first place. IIRC, port 10443 was abused by malicious actors when home routers accepted NAT-PMP requests from, say, an unpatched QNAP, automatically forwarding inbound traffic on 10443 to a NAS with terrible security flaws that was part of a widespread botnet. If you’ve changed the web port, you’re probably also the kind of person who maintains the QNAP. Likewise, port 25 can be bypassed by using authenticated STARTTLS mail submission on 587 or 465, which means you aren’t relaying outbound spam from infected local computers.
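If you want to see what your ISP actually blocks, a quick connect test will tell you. A Python sketch; mail.example.com is a placeholder for any server you know listens on all three ports:

```python
import socket

HOST = "mail.example.com"  # placeholder: substitute a server you know

for port in (25, 465, 587):
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"{HOST}:{port} reachable")
    except OSError as err:
        # On many residential connections 25 fails while 587/465 work.
        print(f"{HOST}:{port} blocked or unreachable ({err})")
```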
Overall fair enough.
Being free on Cloudflare makes it likely to be widely adopted quickly.
It’s also going to break all the firewalls at work, which will no longer be able to do DNS and HTTP filtering based on set categories like phishing, malware, gore, and porn. I wish I didn’t need to block these things, but users can’t be trusted, and not everyone is happy seeing porn and gore on their co-workers’ screens!
The malware and other malicious-site blocking, though, is for me. At every turn users will click the Google-promoted ad sites, just like the KeePass one this week.
Anyway, all of that is likely to stop working now! I guess all that’s left is to break encryption by doing a true MITM: installing certificates on everyone’s machines and making the firewall a proxy. Something I was loath to do.
After I followed the instructions, and with 15 years of system administration experience. I was willing to help, but I guess you’d rather quip.
From my perspective, unless there’s something you’ve not yet disclosed: if WireGuard can reach a public endpoint, like a VPS, then Tailscale would work, since it’s mechanically doing the same thing, being WireGuard with a GUI and a VPS hosted by Tailscale.
If your ISP is blocking ports and destinations, maybe there are other factors in play, usually ones that can be overcome. But your answer is to pay for mechanically the same thing. Which is fine, but I suspect there’s a knowledge gap.
Are you sure? Did you want to troubleshoot this or did you just want to give up?
I’ve got two Synology NASes connected directly to each other for Hyper Backup replication at clients where both units are on CGNAT ISPs with no public IP. And it just works.
I didn’t understand that by ‘willing’ you meant ‘wanting’.
Not possible without a domain, even just “something.xyz”.
The way it works is this: the machine browsing to your website connects to a hostname, the server presents a certificate, and the browser checks both that the certificate’s name matches that hostname and that the certificate chains to a root authority the machine already trusts. Meet both checks and you get the padlock with no warnings.
Now, to get that experience you need to meet those conditions. The machine trying to browse to your website needs to trust the certificate that’s presented, so you have a few options, as I previously described.
Note there’s no reverse proxy here. But it’s also not just a toggle on a web server.
So you don’t need a reverse proxy. Reverse proxies allow some cool things, but here are two things they solve that you may need solving: putting multiple backend servers behind a single IP and port by routing on hostname, and terminating TLS in one place so certificates are managed centrally instead of on every web server.
But in this case you don’t really need either of those, since you have lots of IPs: you’re not offering this publicly, you’re offering it over Tailscale, and both web servers can be accessed directly.
It’s possible to host a DNS server for your domain inside your tailnet and serve DNS responses like: yourwebserver.yourdomain.com = tailnet IP.
Then, using certbot with a Let’s Encrypt DNS challenge and your public DNS provider’s API, you can get a trusted certificate and automatically bind it.
If your tailnet users use your internal DNS server, they’ll resolve your hosted service to its private tailnet IP, the bound certificate’s name will match the hostname, and everyone is happy.
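A quick way to sanity-check that setup from a tailnet client: resolve the name against your internal DNS server, then make a TLS connection and let the machine’s default trust store validate the certificate, exactly as a browser would. A Python sketch using the third-party dnspython package; the hostname and the resolver’s tailnet IP are placeholders:

```python
import socket
import ssl

import dns.resolver  # third-party: pip install dnspython

HOSTNAME = "yourwebserver.yourdomain.com"  # placeholder
INTERNAL_DNS = "100.64.0.53"               # placeholder tailnet resolver IP

# Ask the tailnet DNS server directly, bypassing the system resolver.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [INTERNAL_DNS]
tailnet_ip = resolver.resolve(HOSTNAME, "A")[0].to_text()
print(f"{HOSTNAME} -> {tailnet_ip}")

# Connect to the tailnet IP but validate the certificate against the
# hostname, just like a browser: trusted chain plus matching name.
ctx = ssl.create_default_context()
with socket.create_connection((tailnet_ip, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        print("certificate validated:", tls.version())
```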
There’s more than one way, but that’s how I’d do it. If you don’t own a domain, then you’ll need to host your own private certificate authority and install the root authority certificate on each machine if you want them to trust the certificate chain.
If your family can click the “Advanced > Continue anyway” button, then you don’t need to do anything but use a locally generated cert.
It’s totally fine to bulk-replace specifically sensitive information with “replace all”, as long as it doesn’t break parsing, which is what happens with inconsistency. Like if you have a server named “Lewis-Hamiltons-Dns-sequence”, maybe bulk-rename it to something that’s still clear, like “customer-1112221-appdata”.
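If you do rename things, a consistent one-to-one mapping applied over the whole file keeps it parseable. A Python sketch with made-up names and file paths:

```python
# One-to-one mapping: the same original always becomes the same placeholder.
REPLACEMENTS = {
    "Lewis-Hamiltons-Dns-sequence": "customer-1112221-appdata",  # made-up
}

def sanitise(line: str) -> str:
    for original, placeholder in REPLACEMENTS.items():
        line = line.replace(original, placeholder)
    return line

# Placeholder file names.
with open("raw.log") as src, open("sanitised.log", "w") as dst:
    for line in src:
        dst.write(sanitise(line))
```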
But try to differentiate between ‘am I ashamed’ and ‘this is sensitive, and leaking it would create a PII-exfiltration or security risk’, since only one of these is legitimate.
Note: if I can find that information with a DNS lookup and DNS scraping, it’s not sensitive. And if you’re my customer and you’re hiding your name from me, someone who already invoices you, that’s probably only making me suspicious about whether those logs are even yours.
Just FYI: as a sysadmin, I never want logs tampered with. I import them, filter them, and the important parts will be analysed no matter how much filler debug- and info-level stuff is there.
Same with network captures. Modified pcaps are worse than garbage.
Just include everything.
Sorry you had a bad experience. The customer service side is kind of unrelated to the technical practice side though.
Luckily on your own network you have control over these decisions! Especially with source and destination firewall rules.
Personally, it’s the power of PowerShell that I use for the hundreds of Windows servers, and the power of bash shell scripts for the dozens of Linux servers. None of the Linux servers run a GUI, so there’s no option there anyway. TBH, for me a self-documenting GUI is the slowest way to do work. Configuring hundreds of servers at once with peer-reviewed scripts and change control is much more effective, since the peer review and change control will be needed either way.
Oh, though I do use FortiManager a lot for configuring dozens of FortiGates. Only have a few scripts on it though.
So personally, the way I wish this worked: blur on my SFW account, and no blur on my NSFW account.
Instead I have to unblur on both and hide all NSFW on one account.
The irony is, if I just installed a second Lemmy client, I could have it work the way I want.
Sure was! You need to stay on top of paid, free, and open-source software from a security standpoint. There’s no shortcut, no matter what you think you’re paying for. Your threat model might be better when the service automates a web proxy for you, but that’s only one facet. You trade problems, but you should never feel like you can “set and forget”. Sometimes it’s better to do it yourself, because there’s no lying about responsibilities that way.