
    17.07.2023

    Hello. My name is Sasha Barannik. At Mail.Ru Group I manage a web development department of 15 employees. We have learned to build sites for tens of millions of users and comfortably handle audiences of several million a day. I have been doing web development for about 20 years, and for the last 15 of them I have programmed predominantly in PHP. Although the capabilities of the language and the approach to development have changed a great deal over that time, understanding the main vulnerabilities and knowing how to protect against them remain key skills for any developer.

    There are many articles and guides on security on the Internet. This book seemed to me quite detailed, yet concise and understandable. I hope it will help you learn something new and make your sites safer and more reliable.

    P.S. The book is long, so the translation will be laid out in several articles. So let's get started...

    Another Book on PHP Security?

    There are many ways to begin a book on PHP security. Unfortunately, I haven't read any of them, so I'll have to figure it out as I write. Perhaps I'll start with the most basic things and hope it all works out.

    If we consider an abstract web application launched online by company X, we can assume that it contains a number of components that, if hacked, can cause significant harm. What, for example?

  • Harm to users: gaining access to email, passwords, personal data, bank card details, business secrets, contact lists, transaction history, and deeply guarded secrets (like someone naming their dog Glitter). Leaking this data harms users, both individuals and companies. Harm can also come from web applications that misuse such data and from sites that abuse their users' trust.
  • Damage to Company X itself: harm to users entails reputational damage, compensation payouts, loss of important business information, and additional costs such as infrastructure, security improvements, remediation, legal fees, generous severance packages for departing top managers, and so on.

    I'll focus on these two categories because they cover most of the trouble that web application security is meant to prevent. Every company that has suffered a serious security breach rushes to declare in press releases and on its website how seriously it takes security. So I advise you to feel, with all your heart and in advance, the importance of this problem before you encounter it in practice.

    Unfortunately, security issues are often resolved after the fact. It is believed that the most important thing is to create a working application that meets the needs of users, with an acceptable budget and time frame. It's an understandable set of priorities, but you can't ignore security forever. It's much better to keep it in mind at all times by implementing specific decisions during development when the cost of change is low.

    Treating security as an afterthought is largely a product of programming culture. Some programmers break out in a cold sweat at the mere thought of a vulnerability, while others may dispute that a vulnerability exists right up until they can prove it is not a vulnerability at all. Between these two extremes are many programmers who simply shrug, because they have not been burned yet. It is hard for them to understand this strange world.

    Since the web application security system must protect users who trust the services of the application, it is necessary to know the answers to the questions:

  • Who wants to attack us?
  • How can they attack us?
  • How can we stop them?

    Who wants to attack us?

    The answer to the first question is very simple: everyone and everything. Yes, the entire universe wants to teach you a lesson. That kid with an overclocked computer running Kali Linux? He has probably already attacked you. The shady man who likes to throw a wrench in the works? He has probably already hired someone to attack you. That trusted REST API you pull data from every hour? It was probably hacked a month ago to feed you infected data. Even I may attack you! So don't blindly believe this book. Assume that I am lying, and find a programmer who will expose me and my bad advice. On the other hand, maybe he is going to hack you too...

    The point of this paranoia is to make it easier to mentally categorize everything that interacts with your web application (User, Hacker, Database, Untrusted Input, Manager, REST API), and then assign each category a trust index. Obviously, "Hacker" deserves no trust, but what about "Database"? "Untrusted Input" got its name for a reason, but would you really filter a blog post coming from a colleague's trusted Atom feed?

    Those who are serious about hacking web applications learn to exploit this mindset: instead of attacking the obviously suspect data sources, they more often attack the trusted ones, which are less likely to be well protected. This is no accident: in real life, subjects with a higher trust index arouse less suspicion. These trusted data sources are the first thing I look at when analyzing an application.

    Let's get back to databases. Suppose a hacker can access the database (and we paranoids always assume he can); then it can never be trusted. Yet most applications trust their database without question. From the outside, a web application looks like a single entity, but inside it is a system of separate components exchanging data. If we consider all these components trusted, then when one of them is hacked, all the others are quickly compromised as well. Catastrophic security problems like this cannot be waved away with the phrase "if the database is hacked, we've lost anyway." You can say that, but it is far from certain you would actually lose, if you distrust the database from the start and act accordingly!

    How can they attack us?

    The answer to the second question is quite an extensive list: you can be attacked from anywhere that any component or layer of the web application gets its data. In essence, web applications simply process data and move it from place to place. User requests, databases, APIs, blog feeds, forms, cookies, storage, PHP environment variables, config files, more config files, even the PHP files you execute - all of these can potentially carry malicious data that breaches security and causes damage. In fact, if malicious data is not explicitly present in the PHP code executed for a request, it is likely to arrive as a "payload". That assumes that a) you wrote the PHP source code, b) it was properly reviewed, and c) you are not being paid by a criminal organization.

    If you use a data source without verifying that its data is completely safe and fit for use, you are potentially open to attack. You also need to check that the data you receive matches the data you send. If data is not made completely safe for output, you will likewise have serious problems. All of this can be expressed as the PHP rule "Validate input; escape output."
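
    To make the rule concrete, here is a minimal sketch, assuming a query parameter named id and a hypothetical fetchUsername() lookup: the input is validated on arrival, and the value is escaped at the moment it enters the HTML context.

    // Validate input: accept only an integer id from the query string.
    $id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);
    if ($id === null || $id === false) {
        http_response_code(400);
        exit('Invalid id');
    }

    $username = fetchUsername($id); // hypothetical storage lookup

    // Escape output: make the value safe for the HTML context it enters.
    echo 'Hello, ' . htmlspecialchars($username, ENT_QUOTES, 'UTF-8');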

    These are the obvious data sources that we somehow need to control. Client-side storage can also be a source. For example, most applications identify users by assigning them unique session IDs, which can be stored in cookies. If an attacker steals the cookie value, he can impersonate another user. While we can mitigate some of the risks of user data being intercepted or tampered with, we cannot guarantee the physical security of the user's computer. We can't even guarantee that users will consider "123456" the dumbest password after "password". To add spice, cookies are no longer the only kind of client-side storage today.
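
    We cannot secure the user's machine, but we can at least make the session cookie harder to steal. A minimal sketch (the exact parameters are my own choice, using the array syntax available since PHP 7.3):

    // Mark the session cookie HttpOnly (invisible to JavaScript),
    // Secure (sent over HTTPS only), and SameSite (limits cross-site sending).
    session_set_cookie_params([
        'lifetime' => 0,
        'path'     => '/',
        'secure'   => true,
        'httponly' => true,
        'samesite' => 'Lax',
    ]);
    session_start();

    // Regenerate the id after a privilege change (e.g. login)
    // so a fixated or leaked old id becomes useless.
    session_regenerate_id(true);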

    Another often overlooked risk concerns the integrity of your source code. In PHP, it is increasingly popular to build applications from a large number of loosely coupled libraries, modules, and framework packages. Many of them are downloaded from public repositories like GitHub and installed with package installers like Composer and its web companion, Packagist.org. The security of your source code therefore depends entirely on the security of all these third-party services and components. If GitHub is compromised, it will most likely be used to distribute code with a malicious additive. If Packagist.org is compromised, an attacker will be able to redirect package requests to his own malicious packages.

    At the time of writing, Composer and Packagist.org are subject to known weaknesses in dependency resolution and package distribution, so always double-check everything in your production environment and verify the canonical source of all packages listed on Packagist.org.

    How can we stop them?

    Breaking the security of a web application can be ridiculously easy or extremely time-consuming. It is fair to assume that every web application has a vulnerability somewhere. The reason is simple: all applications are written by people, and people make mistakes, so perfect security is a pipe dream. All applications can contain vulnerabilities, and the programmer's job is to minimize the risks.

    You will have to think carefully to reduce the likelihood of damage from an attack on a web application. In the course of the story, I will talk about possible methods of attack. Some of them are obvious, others are not. But in any case, to solve the problem, it is necessary to take into account some basic security principles.

    Basic Security Principles

    When designing security measures, their effectiveness can be evaluated against the following considerations. Some of them I have already mentioned above.
  • Don't trust anyone or anything.
  • Always assume the worst case scenario.
  • Apply multi-level protection (Defence-in-Depth).
  • Adhere to the principle "the simpler the better" (Keep It Simple Stupid, KISS).
  • Adhere to the principle of "least privilege".
  • Attackers smell obscurity.
  • Read the documentation (RTFM), but never trust it.
  • If it hasn't been tested, then it doesn't work.
  • It's always your fault!

    Let's briefly go over all the points.

    1. Don't trust anyone or anything

    As mentioned above, the correct attitude is to assume that everyone and everything your web application interacts with wants to hack it. This includes other components or layers of the application that are needed to process requests. Everyone and everything. No exceptions.

    2. Always assume the worst-case scenario

    Many security systems share one trait: no matter how well they are made, they can all be breached. If you take this into account, you will quickly see the advantage of the second point. Focusing on the worst case helps you assess the possible extent and severity of an attack. And if it does happen, you may be able to soften the unpleasant consequences with additional protections and architectural changes. Perhaps the traditional solution you are using has already been superseded by something better?

    3. Apply multi-level protection (Defence-in-Depth)

    Defence-in-depth is borrowed from military science, where people long ago realized that numerous walls, sandbags, equipment, body armor, and flasks shielding vital organs from enemy bullets and blades are the right approach to safety. You never know which of them will fail to protect, so several levels of defense ensure you are not relying on a single field fortification or battle formation. And it is not only about single points of failure: imagine an attacker scaling a giant medieval wall with a ladder, only to find another wall behind it, from which he is showered with arrows. Hackers will feel the same way.

    4. Keep It Simple Stupid (KISS)

    The best defenses are always simple. They are easy to design, implement, understand, use, and test. Simplicity reduces bugs, encourages correct behavior, and eases deployment even in the most complex and hostile environments.

    5. Adhere to the principle of least privilege

    Each participant in an exchange of information (user, process, program) should have only the access rights it needs to perform its function.

    6. Attackers smell obscurity

    "Security through obscurity" rests on the assumption that if you use protection A and tell no one what it is, how it works, or even that it exists, it magically helps you because attackers end up confused. In reality this gives only a small advantage: a skilled attacker can usually figure out what you have done, so explicit defenses are needed as well. Those who are unduly convinced that an obscure defense removes the need for a real one should be specially punished, for the sake of ridding them of their illusions.

    7. Read the documentation (RTFM), but never trust it

    The PHP manual is the bible. Of course, it was not written by the Flying Spaghetti Monster, so technically it may contain a number of half-truths, omissions, misinterpretations, or errors that have not yet been noticed or corrected. The same goes for Stack Overflow.

    Specialized sources of security wisdom (PHP-focused and beyond) offer more detailed knowledge. The closest thing to a PHP security bible is OWASP, with its articles, tutorials, and advice. If OWASP discourages something, never do it!

    8. If it hasn't been tested, it doesn't work

    When implementing protections, you must write all the tests needed to verify that they actually work. That includes playing the part of a hacker who belongs behind bars. It may seem far-fetched, but familiarity with web application hacking techniques is good practice: you will learn about possible vulnerabilities, and your paranoia will grow. At the same time, you don't have to tell management about your newly acquired appreciation for hacking web applications. Be sure to use automated tools to identify vulnerabilities. They are useful, but of course they do not replace quality code reviews, or even manual testing of the application. The more resources you spend on testing, the more reliable your application will be.

    9. It's always your fault!

    Programmers tend to assume that vulnerabilities will be exploited only in scattered, isolated attacks, and that their consequences are insignificant.

    For example, information leaks (a well-documented and widespread class of weakness) are often dismissed as minor security issues because they don't harm users directly. Yet leaking information about software versions, development languages, source code locations, application and business logic, database structure, and other aspects of the web application's environment and internals is often essential to a successful attack.

    At the same time, attacks on security systems are often combinations of attacks. Individually insignificant, they can nevertheless open the way for one another. For example, a SQL injection may require a specific username, which can be obtained with a timing attack against an administrative interface instead of a far more expensive and conspicuous brute force. The SQL injection, in turn, makes it possible to mount an XSS attack on a specific administrative account without drawing attention with a mass of suspicious log entries.

    The danger of looking at vulnerabilities in isolation is that you underestimate their threat and therefore treat them too carelessly. Programmers are often too lazy to fix a vulnerability they consider too minor. It is also common practice to shift responsibility for secure development onto downstream programmers or users, often without documenting the specific problems: the very existence of these vulnerabilities is never even acknowledged.

    Apparent insignificance doesn't matter. It is irresponsible to force other programmers or users to fix your vulnerabilities, especially if you haven't even informed them of the problem.

    Input validation

    Input validation is the outer defense perimeter of your web application. It protects the core business logic, data processing, and output generation. In a literal sense, everything outside this perimeter, except the code executed by the current request, is considered enemy territory. All possible entrances and exits of the perimeter are guarded day and night by belligerent sentries who shoot first and ask questions later. Connected to the perimeter are separately guarded (and very suspicious-looking) "allies", including the Model, the Database, and the File System. Nobody wants to shoot at them, but if they push their luck... bang. Each ally has its own perimeter, which may or may not trust ours.

    Remember what I said about who you can trust? Nobody and nothing. The advice not to trust "user input" is everywhere in the PHP world, but that is just one category on the trust scale. In assuming that users cannot be trusted, we tend to assume that everything else can be. This is wrong. Users are merely the most obvious untrusted source of input, because we don't know them and cannot control them.

    Validation Criteria

    Input validation is both the most obvious and the most unreliable defense of a web application. The vast majority of vulnerabilities stem from failures of the validation system, so it is very important that this part of the protection works correctly. It can still fail, so keep the following considerations in mind. When implementing custom validators or adopting third-party validation libraries, remember that third-party solutions tend to cover the common tasks and omit key validation routines your application may need. When using any library intended for security purposes, be sure to check it yourself for vulnerabilities and correct behavior. I also recommend remembering that PHP can exhibit strange, and possibly unsafe, behavior. Look at this example using the filter functions:

    filter_var("php://example.org", FILTER_VALIDATE_URL);
    The filter passes it without complaint. The problem is that the accepted php:// URL can then be handed to a PHP function that expects a remote HTTP address, and will instead return data from the PHP runtime itself (via the php:// stream wrapper). The vulnerability arises because the filter offers no option to restrict the allowed URI schemes, even though the application expects an http, https, or mailto link rather than some PHP-specific URI. Such an overly general approach to validation must be avoided at all costs.
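
    A minimal sketch of a stricter check (the helper name and the allowed schemes are my own assumptions): validate the URL syntactically first, then explicitly whitelist the scheme.

    // FILTER_VALIDATE_URL accepts any scheme, including php://,
    // so restrict the scheme to what the application actually expects.
    function isAllowedUrl($url, array $schemes = array('http', 'https'))
    {
        if (filter_var($url, FILTER_VALIDATE_URL) === false) {
            return false; // not even a syntactically valid URL
        }
        $scheme = strtolower((string) parse_url($url, PHP_URL_SCHEME));
        return in_array($scheme, $schemes, true);
    }

    var_dump(isAllowedUrl('php://example.org'));   // bool(false)
    var_dump(isAllowedUrl('https://example.org')); // bool(true)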

    Be careful with context

    Input validation should keep unsafe data out of your web application. A major stumbling block: data is usually checked for safety only against its first intended use.

    Let's say I received data containing a name. I can easily check it for apostrophes, hyphens, brackets, spaces, and a whole range of alphanumeric Unicode characters. The name is now valid data that can be used for display (its first intended use). But if I use it somewhere else (for example, in a database query), it ends up in a new context, and some of the characters that are legitimate in a name are dangerous there: a perfectly valid name like O'Brien, concatenated straight into a query string, turns into a SQL injection.
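
    For the SQL context itself, the standard answer is not more input validation but parameterization. A minimal sketch with PDO, assuming an existing connection in $pdo and an illustrative users table:

    // The validated name is still untrusted in the SQL context.
    // A prepared statement keeps the data separate from the query structure,
    // so an apostrophe in the name cannot break out of its placeholder.
    $stmt = $pdo->prepare('SELECT id FROM users WHERE name = :name');
    $stmt->execute(array('name' => $name));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);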

    It turns out that input validation is inherently unreliable. It is most effective at rejecting unambiguously invalid values: when something must be an integer, an alphanumeric string, or an HTTP URL. Such formats and values have clear constraints and, when properly checked, are less likely to pose a threat. Other values (free-form text, GET/POST arrays, HTML) are harder to check and more likely to carry malicious data.

    Since our application spends most of its time passing data between contexts, we can't simply validate all input and consider the job done. Input validation is only the first line of defense, and by no means the only one.

    Alongside input validation, escaping is a very commonly used protection technique. With escaping, data is made safe as it enters each new context. It is usually employed to protect against cross-site scripting (XSS), but it is also in demand for many other tasks, as a filtering tool.

    Escaping protects the receiver from misinterpreting the outgoing data. But it alone is not enough: as data enters a new context, it needs a check tailored specifically to that context.
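
    A minimal sketch of what "a different escape for each context" means in practice (the variable and field names are illustrative):

    $comment = isset($_POST['comment']) ? $_POST['comment'] : '';

    // HTML body context: escape the HTML special characters.
    echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';

    // URL context: percent-encode instead. The output of rawurlencode()
    // contains no HTML special characters, so it is also safe inside
    // the quoted attribute.
    echo '<a href="/search?q=' . rawurlencode($comment) . '">find similar</a>';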

    While this may look like duplicating the initial input validation, the additional checks actually fit the new context, where the requirements on the data can be quite different. For example, data coming from a form might contain a percentage. On first use, we check that the value is indeed an integer. But when passing it to our application's model, new requirements may emerge: the value must fit into a certain range, something the application's business logic demands. If that additional check is not performed in the new context, serious problems can arise.
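
    A minimal sketch of that two-step check (the "discount" field name and the 0-100 range are illustrative assumptions):

    // First context: the form value must be an integer at all.
    $raw = isset($_POST['discount']) ? $_POST['discount'] : '';
    if (!ctype_digit($raw)) {
        exit('Invalid input: not a non-negative integer');
    }
    $percent = (int) $raw;

    // New context: the application model also demands a bounded range.
    // (ctype_digit already rules out negative values.)
    if ($percent > 100) {
        exit('Invalid input: percentage out of range');
    }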

    Use only whitelists, not blacklists

    Blacklisting and whitelisting are the two primary approaches to input validation. Blacklisting means checking for invalid data, whitelisting means checking for valid data. Whitelisting is preferable because validation passes only the data we expect. Blacklists, by contrast, rely on programmers anticipating every possible kind of erroneous data, so it is much easier to get confused, miss something, or make a mistake.

    A good example is any validation routine meant to make HTML safe for unescaped output in a template. With a blacklist, we have to check that the HTML contains no dangerous elements, attributes, styles, or executable JavaScript. That is a lot of work, and blacklist-based HTML sanitizers always manage to overlook dangerous combinations of code. Whitelist-based tools eliminate this ambiguity by allowing only known, permitted elements and attributes; everything else is separated, isolated, or removed, no matter what it is.

    So whitelisting is preferable for any validation procedure, due to its higher security and reliability.
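
    A minimal whitelist sketch for a simpler case (the parameter and field names are illustrative): only explicitly known values pass, and everything else is rejected outright.

    // The only sort fields that exist are the ones we list ourselves.
    $allowedSortFields = array('name', 'created_at', 'price');

    $sort = isset($_GET['sort']) ? $_GET['sort'] : 'name';
    if (!in_array($sort, $allowedSortFields, true)) {
        $sort = 'name'; // unknown value: reject it, don't try to fix it
    }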

    Never try to correct input data

    Input validation is often accompanied by filtering. Where validation simply judges whether data is correct (returning a positive or negative verdict), filtering changes the data being checked so that it satisfies specific rules.

    This is usually fairly harmless. Traditional filters include, for example, stripping every character except digits from phone numbers (including stray brackets and hyphens), or trimming unnecessary horizontal or vertical whitespace. In such situations, minimal cleanup eliminates display or transmission errors. However, it is easy to get carried away and use filtering to strip out data deemed malicious.

    One consequence of trying to fix input is that an attacker can predict the effect of your fixes. Suppose there is some invalid string value: you search for it, delete it, and consider the filtering done. But what if the attacker builds a value split apart by that very string, precisely to trick your filter?

    <scri<script>pt>alert(document.cookie);</scri</script>pt>
    In this example, simply filtering out the <script> tag achieves nothing: removing the one explicit <script> tag (and its matching closing tag) splices the surrounding fragments into a perfectly valid HTML script element. The same applies to filtering by any other specific format. All of this shows clearly why input correction must never be the application's last protective perimeter.

    Instead of trying to correct input, just use a whitelist-based validator and reject such input attempts entirely. And where you do need to filter, always filter before validating, never after.

    Never trust external validators, and monitor for vulnerabilities

    Earlier I noted that validation is required every time data is passed into a new context. That also applies to validation performed outside the web application itself, such as validation and other constraints applied to HTML forms in the browser. Look at this HTML5 form:

    <select name="country">
        <option value="ie">Rep. Of Ireland</option>
        <option value="uk">United Kingdom</option>
    </select>
    HTML forms can impose constraints on input data. You can limit choices to a fixed list of items, set minimum and maximum values, and cap the length of text. HTML5 offers even more: browsers can validate URLs and email addresses, and constrain dates, numbers, and ranges (although support for the latter two is rather inconsistent). Browsers can also validate input against a JavaScript regular expression supplied in the pattern attribute.

    With all this abundance of controls, we must not forget that their purpose is to improve the usability of your application. Any attacker can create a form that omits the constraints of your original form, or even craft an automated HTTP client that fills in the form however it likes!

    Another example of external validation is receiving data from third-party APIs, such as Twitter's. The social network has a good reputation and is usually trusted without question. But since we're paranoid, we shouldn't trust even Twitter: if it were compromised, its responses would contain unsafe data we would not be prepared for. So apply your own checks here as well, so as not to be caught defenseless.

    Where we do rely on external validation controls, it is convenient to watch for their failures. For example, if an HTML form caps a field's maximum length and we receive input that exceeds that limit, it is logical to assume the user is trying to bypass validation. This way we can log breaches of the external controls and respond to potential attacks, for example by limiting access or throttling requests.
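
    A minimal sketch of that idea, assuming the form declared maxlength="255" on a comment field:

    // The browser-side maxlength should make this impossible, so oversized
    // input means the client-side control was bypassed deliberately.
    $comment = isset($_POST['comment']) ? $_POST['comment'] : '';
    if (strlen($comment) > 255) {
        error_log(sprintf(
            'Validation bypass attempt from %s: comment length %d',
            isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : 'unknown',
            strlen($comment)
        ));
        http_response_code(400);
        exit('Invalid input');
    }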

    Avoid type conversions in PHP

    PHP is not a strongly typed language, and most of its functions and operators handle types loosely, converting them implicitly. This can lead to serious problems. And it is not the values themselves that are especially vulnerable, but the validators. For example:

    assert(0 == "0ABC");  // passes: TRUE under loose comparison
    assert(0 == "ABC");   // passes: TRUE as well (even without a leading digit!)
    assert(0 === "0ABC"); // fails: the strict comparison is FALSE, and assert issues a warning
    // Note: since PHP 8, both loose comparisons above are FALSE, because
    // comparing an integer with a non-numeric string now compares them as strings.
    When designing validators, make sure you use strict comparison and explicit manual type casting whenever input or output values might be strings. Forms, for example, return strings, so if you are working with a value that must be an integer, be sure to verify its type:

    function checkIntegerRange($int, $min, $max)
    {
        if (is_string($int) && !ctype_digit($int)) {
            return false; // contains non-digit characters
        }
        if (!is_int((int) $int)) {
            return false; // another non-integer value, or one exceeding PHP_MAX_INT
        }
        return ($int >= $min && $int <= $max);
    }

    The same caution applies to outgoing connections. When fetching data over HTTPS with PHP streams, enable SSL peer verification in the stream context:

    $context = stream_context_create(array(
        "ssl" => array("verify_peer" => TRUE)
    ));
    $body = file_get_contents("https://api.example.com/search?q=sphinx", false, $context);
    UPD. In PHP 5.6+, the ssl.verify_peer option is set to TRUE by default.

    The cURL extension has server certificate verification enabled out of the box, so nothing needs configuring. However, programmers sometimes take a thoughtless approach to the security of their libraries and applications, and you can run into the following line in any of the libraries your application depends on:

    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    Disabling server verification in an SSL context, or via curl_setopt(), opens you up to man-in-the-middle attacks. Yet it gets disabled precisely to silence the annoying errors that may in fact indicate an attack, or an attempt by the application to contact a host whose SSL certificate is misconfigured or expired.
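
    The correct move is the opposite: leave verification on explicitly, so a later refactor cannot silently drop it. A minimal sketch (the URL is illustrative):

    $ch = curl_init('https://api.example.com/search?q=sphinx');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Verify the peer's certificate chain...
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
    // ...and check that the certificate matches the requested host name.
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
    $body = curl_exec($ch);
    curl_close($ch);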

    Web applications often act as a proxy for user actions, for example as a Twitter client. The least we can do is hold our applications to the high standards set by browsers, which warn users and try to protect them from connecting to suspicious servers.

    Conclusions

    We are often well placed to create a secure application. But we ourselves bypass reasonable restrictions to ease development and debugging, or to silence annoying error output. Or, with the best of intentions, we needlessly overcomplicate the application's logic.

    But hackers earn their keep too. They look for new ways to bypass our imperfect defenses and study the vulnerabilities of the modules and libraries we use. If our task is to create a secure web application, theirs is to compromise our services and data. Ultimately, we are all working to improve our products.
