CHAPTER-1.Design the application architecture
Objective 1.1: Plan the application layers
An application is simply a set of functionality: a screen or set of screens that displays information, a way to persist data across uses, and a way to make business decisions.
A layer is a logical grouping of code that works together as a common concern. Layers work together to produce the completed application.
One of the essential parts of an ASP.NET MVC application is the architectural design of the Model-View-Controller (MVC) pattern. It is based on providing separation between the appearance of the application and the business logic within the application. The model is designed to manage the business logic, the view is what the user sees, and the controller manages the interaction between the two.
Planning data access
A key reason for using ASP.NET MVC to meet your web-based business needs is how it connects users to data.
Data access options
- Using an object relational mapper (O/RM)
An O/RM is an application or system that aids in the conversion of data between a relational database management system (RDBMS) and the object model that is necessary for use within object-oriented programming.
- Writing your own component to manage interactions with the database
This approach might be preferred when you are working with a data model that does not closely match your object model, or when you are using a database format that is not purely relational, such as NoSQL.
Design approaches
You must also consider how you will manage state. If you want to use sessions across multiple servers, you likely need to use Microsoft SQL Server because Microsoft Internet Information Services (IIS) supports it by default. If you plan to maintain state on your own, it needs to become part of your data management design.
Data access from within code
The primary data access pattern in C# is the Repository pattern, which is intended to create an abstraction layer between the data access layer and the business logic layer. This abstraction helps you handle changes in either the business logic or the data access layer by breaking the dependencies between the two. What the repository does internally is separate from the business logic layer.
The Repository pattern is also highly useful in unit testing because it enables you to substitute the actual data connection with a mocked repository that provides well-known data. Another term that can describe the repository is persistence layer. The persistence layer deals with persisting (storing and retrieving) data from a datastore, just like the repository.
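The repository abstraction and its mocked test double can be sketched as follows. This is a minimal illustration: the `Article` type, `IArticleRepository` interface, and `InMemoryArticleRepository` class are hypothetical names invented for this example, not framework types; a production implementation would wrap an O/RM or hand-written data access code behind the same interface.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative domain type; any model class would do.
public class Article
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// The abstraction the business logic layer depends on.
public interface IArticleRepository
{
    IQueryable<Article> GetArticles();
    void Add(Article article);
}

// A mocked repository with well-known data, useful in unit tests;
// a production implementation would talk to the datastore instead.
public class InMemoryArticleRepository : IArticleRepository
{
    private readonly List<Article> _store = new List<Article>();

    public IQueryable<Article> GetArticles()
    {
        return _store.AsQueryable();
    }

    public void Add(Article article)
    {
        _store.Add(article);
    }
}
```

Because the business logic holds only an `IArticleRepository`, swapping the in-memory version for a database-backed one requires no change to the business layer.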
Planning for separation of concern (SoC)
Separation of concern (SoC)
is a software development concept that separates a computer program into different sections, or concerns, in which each concern has a different purpose. By separating these sections, each can encapsulate information that can be developed and updated independently.
A term closely associated with SoC is loose coupling. Loose coupling is an architectural approach in which the designer seeks to limit the amount of interdependencies between various parts of a system.
Using models, views, and controllers appropriately
Model
The model is the part of the application that handles business logic. A model object manages data access and performs the business logic on the data. Unlike other roles in an MVC application, the model does not implement any particular interface or derive from a certain base class.
Model classes are traditionally placed in the Models folder. It is also common, however, to store the models in a separate assembly. Storing the models in a separate assembly makes model sharing easier because multiple applications can use the same set of models. It also provides other incremental improvements, such as enabling you to separate model unit tests from controller unit tests as well as reducing project complexity.
Controllers
Controllers are the part of ASP.NET MVC 4 that handles incoming requests, handles user input and interaction, and executes application logic.
A controller is based on the ControllerBase class and is responsible for locating the appropriate action method to call, validating that the action method can be called, getting values in the model to use as parameters, managing all errors, and calling the view engine to write the page. It is the primary handler of the interaction from the user.
Action methods are typically one-to-one mappings to user interactions. Each user interaction creates and calls a uniform resource locator (URL).
A better approach when laying out the controller structure is to have a controller for each type of object with which the user will be interacting on
the screen. This enables you to compartmentalize the functionality around the object into a single place, making code management simpler and providing more easily understandable URLs.
The best time to conceptualize your controller structure is when you are building your data model for the application.
ACTIONS AND ACTION RESULTS
An action result is any kind of outcome from an action. Although an action traditionally returns a view or partial view, it can also return JavaScript Object Notation (JSON) results or binary data, or redirect to another action, among other things.
Action names are also important. Because the name is part of the URL request, it should be short and descriptive. Do not be so descriptive that you provide too much of the business process in the name, which can result in security issues.
ROUTES AND ROUTING
The routing table is stored in the Global.asax file. The routing system enables you to define URL mapping routes and then handle the mapping to the right controller and actions. It also helps construct outgoing URLs used to call back to the controller/actions.
ASP.NET provides some default routing. The default routing format is {controller}/{action}/{id}. That means an HTTP request to http://myurl/Product/Detail/1 will look for the Detail action on the ProductController that accepts an integer as a parameter. The routing engine doesn't know anything about ASP.NET MVC; its only job is to analyze URLs and pass control to the route handler. The route handler is there to find an HTTP handler, or an object implementing the IHttpHandler interface, for a request. MvcHandler, the default handler that comes with ASP.NET MVC, extracts the controller information by comparing the request with the template values in the routing table. The handler extracts the string and sends it to a controller factory that returns the appropriate controller.
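The default route described above is registered in code, as in the RouteConfig class generated by the ASP.NET MVC 4 project template (shown here as a sketch; in older templates the same call appears directly in Global.asax):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // {controller}/{action}/{id}: a request to /Product/Detail/1 maps to
        // ProductController.Detail(1); the defaults fill in missing segments.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index",
                            id = UrlParameter.Optional }
        );
    }
}
```

Routes are matched in registration order, so more specific routes should be added before this catch-all default.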
ASYNCHRONOUS CONTROLLERS
ASP.NET MVC 4 brings the concept of asynchronous controllers into the default controller class.
You should strongly consider asynchronous methods when the operation is network-bound or I/O-bound rather than CPU-bound. Also, asynchronous methods make sense when you want to enable the user to cancel a long-running method.
The key to using the new asynchronous framework is the Task framework in the System.Threading.Tasks namespace.
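The cancellation scenario mentioned above builds on the same Task framework through a CancellationToken. The sketch below is framework-agnostic plain C# (the method and class names are invented for illustration); in an MVC action you would accept the token as a parameter instead of creating it yourself.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class LongRunning
{
    // A cancelable long-running operation: Task.Delay stands in for a slow
    // network or I/O call and observes the token, throwing
    // OperationCanceledException when cancellation is requested.
    public static async Task<string> FetchSlowlyAsync(CancellationToken token)
    {
        try
        {
            await Task.Delay(TimeSpan.FromSeconds(30), token);
            return "completed";
        }
        catch (OperationCanceledException)
        {
            return "canceled";   // clean up and report cancellation
        }
    }
}
```

The caller creates a CancellationTokenSource, passes its Token in, and calls Cancel to abandon the operation without waiting the full 30 seconds.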
LISTING 1-1 Calling an external data feed
public async Task<ActionResult> List()
{
    ViewBag.SyncOrAsync = "Asynchronous";
    string results = string.Empty;
    using (HttpClient httpClient = new HttpClient())
    {
        var response = await httpClient.GetAsync(new Uri("http://externalfeedsite"));
        byte[] downloadedBytes = await response.Content.ReadAsByteArrayAsync();
        Encoding encoding = new ASCIIEncoding();
        results = encoding.GetString(downloadedBytes);
    }
    return PartialView("partialViewName", results);
}
Views
THE RAZOR VIEW AND WEB FORMS VIEW ENGINES
TABLE 1-1 Comparisons between Razor and Web Forms syntax
Code expression | Razor | Web Forms
---|---|---
Implicit | <span>@article.Title</span> | <span><%: article.Title %></span>
Explicit | <span>Title@(article.Title)</span> | <span>Title<%: article.Title %></span>
EXTENDING THE VIEW ENGINES
Both the Web Forms and the Razor view engines are derived from the BuildManagerViewEngine
class, which is derived from the VirtualPathProviderViewEngine
class.
Choosing between client-side and server-side processing
A best practice is to put validation on both sides—on the client side to provide a responsive UI and lower the network cost, and on the server side to act as a gateway to ensure that the input data is valid.
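One common way to get both sides from a single definition is data annotations: the same attributes drive client-side unobtrusive validation in the view and the server-side gateway check. The sketch below uses the real System.ComponentModel.DataAnnotations types; the RegistrationModel class and ServerSideValidation helper are illustrative names (in an MVC controller you would simply check ModelState.IsValid).

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// The attributes below are rendered into client-side validation rules by the
// MVC helpers and are also enforced on the server.
public class RegistrationModel
{
    [Required]
    [StringLength(50)]
    public string UserName { get; set; }

    [Required]
    [EmailAddress]
    public string Email { get; set; }
}

public static class ServerSideValidation
{
    // The server-side gateway, independent of any client-side script.
    public static bool IsValid(object model, out List<ValidationResult> errors)
    {
        errors = new List<ValidationResult>();
        var context = new ValidationContext(model);
        return Validator.TryValidateObject(model, context, errors,
                                           validateAllProperties: true);
    }
}
```

Even if a user disables JavaScript and bypasses the client-side checks, the server-side validation still rejects invalid input.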
Designing for scalability
Horizontal scaling
With horizontal scaling, you scale by adding additional nodes to the system. This is a web farm scenario, in which a number of commodity-level systems can be added or removed as
demand fluctuates.
Horizontal scaling factors:
- Network hardware that will be deployed and how it handles sessions.
- How multiple servers will affect server caching of information, such as whether to cache rendered HTML that was sent to the client or cache data from a database.
- If your application will provide file management, consider where those files will be stored to ensure access across multiple servers.
Scaling horizontally adds some architectural considerations, but it is a low-cost and effective way to scale, especially because the cost for commodity servers continues to drop. Keep in mind that commodity servers are not necessarily physical servers, but can be virtual machines.
Vertical scaling
With vertical scaling, you scale by adding resources to a single system. This typically involves adding CPUs or memory. It can also refer to accessing more of the existing resources on the system.
You also need to consider database scalability when determining your data access methods.
Objective 1.2: Design a distributed application
Integrating web services
LISTING 1-5 Using the HttpService class to get output from a REST URL
private HttpService _httpService;
public ArticleRepository()
{
_httpService = new HttpService();
}
public IQueryable<Article> GetArticles()
{
Uri host = new Uri("http://www.yourdomain.com");
string path = "your/rest/path";
Dictionary<string, string> parameters = new Dictionary<string, string>();
NetworkCredential credential = new NetworkCredential("username",
"password");
XDocument xml = _httpService.Get(host, path, parameters, credential);
return ConvertArticleXmlToList(xml).AsQueryable();
}
private List<Article> ConvertArticleXmlToList(XDocument xml)
{
List<Article> article = new List<Article>();
var query = xml.Descendants("Article")
.Select(node =>
node.ToString(SaveOptions.DisableFormatting));
foreach (var articleXml in query)
{
article.Add(ObjectSerializer.DeserializeObject<Article>(articleXml));
}
return article;
}
Designing a hybrid application
A hybrid application is an application hosted in multiple places.
Two primary hybrid patterns:
- Client-centric
The client application determines where the application needs to make its service calls. This pattern is generally the easiest to code, but it is also the most likely to fail. Applications built with this approach are the most fragile because any change to either the server or the client might require a change to the other part.
- System-centric
A more service-oriented architecture (SOA) approach. It ideally includes a service bus, such as Windows AppFabric, which distributes service requests as appropriate, whether to a service in the cloud, on-premises, or at another source completely, such as a partner or provider site.
Scalability, latency, cost, robustness, and security are considerations as you evaluate a hybrid solution.
Planning for session management in a distributed environment
A session is stored on the server and is unique for a user's set of transactions. The browser needs to pass back a unique identifier, called the SessionId, which can be sent as part of a small cookie or added onto the query string where it can be accessed by the default handler.
You can approach sessions in ASP.NET MVC 4 in two different ways. The first is to use session to store small pieces of data. The other is to be completely stateless and not use session at all.
There are three modes of session management available in Microsoft Internet Information Services (IIS): InProc, StateServer, and SQLServer.
Planning web farms
Web farms are groups of servers that share the load of handling web requests.
There are many advantages of using a web farm:
- Availability
If a server in the farm goes down, the load balancer redirects all incoming requests to other servers.
- Performance
A web farm also improves performance by reducing the load each server handles, thus decreasing contention problems.
- Scalability
The ability to add servers to the farm also provides better scalability.
Objective 1.3: Design and implement the Windows Azure role life cycle
Understanding Windows Azure and roles
Windows Azure provides both platform as a service (PaaS) and infrastructure as a service (IaaS) services. With PaaS, cloud providers deliver a computing platform, typically including an operating system, a programming language execution environment, a database, and a web server. IaaS offers virtual machines.
There are three different types of solutions available in Windows Azure:
- Virtual Machines
- Web Sites
A good solution for hosting and running your ASP.NET MVC 4 applications without the overhead of maintaining a full virtual machine.
- Cloud Services
A strictly PaaS approach; it was the initial deployment model for Windows Azure.
Identifying startup tasks
Windows Azure startup tasks are used to perform actions before a role starts.
There are three types of roles in Windows Azure:
- Web (startup tasks available)
Scenario: Run IIS.
- Worker (startup tasks available)
Scenario: Run middle-tier applications without IIS.
- VM (startup tasks not available)
Scenario: Complete access to the VM instances.
Startup tasks examples: register COM components, install a component, or set registry keys.
Startup tasks are defined in the Task element, which is a node in the Startup element of the ServiceDefinition.csdef file. A typical startup task is a console application or a batch file that can start one or more Windows PowerShell scripts.
Startup tasks have to end with an error level of zero (0) for the startup process to complete. When startup tasks end with a non-zero error level, the role does not start.
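The exit-code contract can be sketched as a small console application. The StartupTask class and its Run helper are illustrative names (a real task's Main would simply return StartupTask.Run with the actual configuration work); the point is that every outcome is funneled into an explicit error level.

```csharp
using System;

// A startup task is typically a small console application; Windows Azure
// treats a nonzero exit code as failure and does not start the role.
public static class StartupTask
{
    // The return value becomes the process exit code (error level).
    public static int Run(Action configure)
    {
        try
        {
            configure();   // e.g., register a COM component or set registry keys
            return 0;      // error level 0: the startup process continues
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("Startup task failed: " + ex.Message);
            return 1;      // nonzero error level: the role does not start
        }
    }
}
```

Making the configuration work idempotent (safe to run twice) matters here, for the reasons described below.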
The AppCmd.exe command-line tool is used in Windows Azure to manage IIS settings at startup. A startup task can run more than once, and misconfigured AppCmd.exe commands can result in runtime errors. For example, a common error is to add a Web.config section in the startup task: when the task runs again, it throws an error because the section already exists after the initial run. Managing this kind of situation requires that your application monitor both its internal and external statuses.
Marking a task as background prevents Windows Azure from waiting until the task completes before it puts the role into a Ready state and creates the website.
example:
<Startup>
  <Task commandLine="Startup\ExecWithRetries.exe &quot;/c:Startup\AzureEnableWarmup.cmd /d:5000 /r:20 /rd:5000 >> c:\enablewarmup.cmd.log 2>>&amp;1&quot;"
        executionContext="elevated" taskType="background" />
</Startup>
Remember that the names of websites and application pools are not generally known in advance. The application pool is usually named with a globally unique identifier (GUID), and your website is typically named rolename_roleinstancenumber, ensuring that each website name is different for each version of the role.
Identifying and implementing Start, Run, and Stop events
OnStart method vs. a startup task
A startup task runs in a different process, which enables it to be at a different level of privilege than the primary point of entry. This is useful when you need to install software or perform another task that requires a different privilege level.
State can be shared between the OnStart method and the Run method because they both are in the same application domain (AppDomain).
A startup task can be configured as either a background or foreground task that runs parallel with the role.
After all the configured startup tasks are completed, the Windows Azure role begins the process of running.
There are three major events you can override: OnStart, Run, and OnStop.
Flow of Windows Azure processing:
- Provision Hardware
- Run Startup Tasks
- OnStart()
- Run()
- OnStop()
Override OnStart() example
public class WorkerRole : RoleEntryPoint
{
public override bool OnStart()
{
try
{
// Add initialization code here
}
catch (Exception e)
{
Trace.WriteLine("Exception during OnStart: " + e.ToString());
// Take other action as needed.
}
return base.OnStart();
}
}
When the OnStart method is called, Windows Azure sets the role status to Busy. While the role is Busy, it does not receive any external requests.
If OnStart returns true, Windows Azure assumes the OnStart method was successful and allows the role to run. When OnStart returns false, Windows Azure assumes a problem occurred and immediately stops the role instance.
The Application_Start method is called after the OnStart method. A web role can include initialization code in the Application_Start method.
The Run method is equivalent to the Main method in that it starts the actual application. You do not typically need to override the Run method. If you do, make sure your code blocks indefinitely, because a return from the Run method means the application has stopped running and that the process should continue through to shutdown.
Because the Run method is void, your override of the Run method can run in parallel with the default Run method if desired. You might want to do this if you want to have background tasks running throughout the life of your application.
Override Run() example
public override void Run()
{
try
{
Trace.WriteLine("WorkerRole entrypoint called", "Information");
while (true)
{
Thread.Sleep(10000);
Trace.WriteLine("Working", "Information");
}
// Add code here that runs in the role instance
}
catch (Exception e)
{
Trace.WriteLine("Exception during Run: " + e.ToString());
// Take other action as needed.
}
}
Override OnStop() example
public override void OnStop()
{
try
{
// Add code here that runs when the role instance is to be stopped
}
catch (Exception e)
{
Trace.WriteLine("Exception during OnStop: " + e.ToString());
// Take other action as needed.
}
}
When you override the OnStop method, remember the hard limit of five minutes that Windows Azure puts on all non-user-initiated shutdowns. Because of the hard stop, you need to make sure that either your code can finish within that period or that it will not be affected if it does not run to completion.
Objective 1.4: Configure state management
HTTP is a stateless protocol.
By not having to keep an open connection to a requestor or not having to remember anything about a user’s last connection, a web server can handle many more concurrent users.
The stateless nature of HTTP enables a server to support a connection only until it handles a request and sends a response.
Choosing a state management mechanism
Web Forms use ViewState as a main way to manage state. The ViewState is a construct that gathers pertinent information about the controls on a page and stores it on the page in a hidden form field.
In an ASP.NET MVC 4 application, state information can be stored in the following locations:
- Cache
A memory pool stored on the server and shared across users.
- Session
Stored on the server and unique for each user.
- Cookies
Stored on the client and passed with each HTTP request to the server.
- QueryString
Passed as part of the complete URL string.
- Context.Items
Part of the HttpContext and lasts only the lifetime of that request.
- Profile
Stored in a database and maintains information across multiple sessions.
The Cache object provides a broader scope than the other state management objects because the data is available to all classes within the ASP.NET application. The Cache object enables you to store key-value pairs that are accessible by any user or page in that application domain.
If you consider using Cache in a web farm setting, remember that each server maintains its own copy of the cache. You cannot assume that a value is cached simply because the value was used as part of the last request; the request might be connecting to a different server that never cached the value in the first place.
Inheriting the SessionStateStoreProviderBase class enables you to create your own session provider to support situations in which the default session store is inadequate. For example, there is no built-in support for managing state shared by multiple servers in an Oracle database, so you would need to write a custom provider.
Cookies are small snippets of information stored on the client side that can persist across sessions. They are individualized to a particular domain or subdomain, so with careful planning you can use cookies across a web farm.
Cookie information is sent to the server and returned from the server with every request, so size can have an impact because the cookie is always part of the HTTP request. A cookie is available in HttpContext.Request.Cookies when reading and HttpContext.Response.Cookies when storing a value. A cookie can also be set with an expiration date so that the data stored in it has a limited time span.
A query string is information that can be used by only one user. Its lifetime is a single request unless architected to be managed differently. You can access the data through HttpContext.Request.QueryString["attributeName"] on the server and from the client side by parsing window.location.href. The query string remains visible to the user even over HTTPS, so ASP.NET MVC supports several encryption schemes that enable you to encrypt data as necessary for inclusion in the query string.
Context.Items contains information that is available only during a single request.
If the state information is for display purposes only, you can maintain the information on both the client and server. Caching state information on the client eliminates the need to send it back as part of the rendered HTML with every call and increases performance. Keeping it on the server side enables you to use ASP.NET MVC to work with the data; however, you have to make the state part of the HTTP response, and you have to perform initial server requests on all the state changes.
Factors that should be considered about state information:
- Where the state will be used: client, server, or both
- How to store state
- The size of the information to be maintained
Cookies and query strings are suited to a few snippets of information. Session is the most commonly used method for storing information between requests. A database is a good choice for larger amounts of data.
Planning for scalability
First, you need to understand what kind of state information you will need to maintain. Maintaining only a few pieces of information is quite different from maintaining hundreds of pieces of information. Each of these needs indicates a different solution.
You can use an InProc session, an out-of-process session (StateServer or SQLServer), or a sessionless solution.
Using cookies or local storage to maintain state
Cookies
Cookies are the predecessor to the Web Storage API. Cookies are limited in size to 4 kilobytes (KB).
Any site information you might need persisted on the client side, such as login credentials when the user selects Remember Me, will have to be saved as a cookie.
HTML5 Web Storage
The purpose of the Web Storage API is to keep easily retrievable JavaScript objects in the browser memory for use on client-side operations. HTML5 Web Storage API is concerned only with maintaining state information on the client. If you want state information to be used server-side, you have to write the code to send it back as needed.
HTML5 Web Storage can use either the sessionStorage or localStorage object.
The sessionStorage scope enables you to use set and get calls on different pages as long as the pages are from the same origin URL. Objects in sessionStorage persist as long as the browser window (or tab) is not closed.
The localStorage object provides another option that increases scope because localStorage values persist beyond window and browser lifetimes, and values are shared across every window or tab communicating with the same origin URL.
Check browser compatibility
If you perform the check on the server, such as by using System.Web.HttpBrowserCapabilities browser = Request.Browser, you can send back a different view based on the browser version. You could have one view based on HTML5 and another not using HTML5, and send the appropriate one back to the client.
Check for localStorage in JavaScript example
if (window.localStorage) { window.localStorage.setItem('keyName', 'valueToUse'); }
or
window.localStorage.keyName = 'valueToUse';
This code sets an event listener:
window.addEventListener('storage', displayStorageEvent, true);
The event listener fires when there is any change in storage, either localStorage or sessionStorage.
Applying configuration settings in the Web.config file
Sessions can be enabled in the Web.config file through the use of a <sessionState> node.
<system.web>
<sessionState mode="InProc" cookieless="false" timeout="20"
sqlConnectionString="data
source=127.0.0.1;Trusted_Connection=yes"
stateConnectionString="tcpip=127.0.0.1:42424"
/>
</system.web>
A StateServer configuration for configuring sessionState is as follows:
<system.web>
<sessionState mode="StateServer"
stateConnectionString="192.168.1.103:42424" />
</system.web>
Configuration items can also be added at a lower part of the configuration stack, including the Machine.config file, which is the lowest configuration file in the stack and applies to all websites on that server.
Implementing sessionless state
Sessionless state is a way to maintain state without supporting any session modes.
Determining when to use sessionless state in your ASP.NET MVC application requires a deeper look into the mechanics of how sessions interact with the controller.
If you determine that your application will be best served by sessionless state, you need to determine how you will pass the unique identifier from request to request. Mechanisms available in ASP.NET MVC 4:
- Create the identifier on the server the first time the user visits the site and continue to pass this information from request to request.
- Use a hidden form field to store and pass the information from one request to the next. There is some risk in this because a careless developer could forget to add the value, and you will lose your ability to maintain state.
- Because the Razor view engine supports the concept of a layout or master page, you can script the unique identifier storage in that area so that it will render on every page.
- Add JavaScript functionality to store the unique identifier on the client side in a sessionStorage or localStorage and make sure that it is sent back to the server when needed. That way, you don’t have to worry about losing the information; you just need to make sure that you include it when necessary.
- Add the unique identifier to the query string so that it is always available whenever a Request object is available.
- Add the unique identifier to the URL and ensure that your routing table has the value mapped accordingly.
Many of the decisions you make about what to put in session or the state model are based on whether you'll use the information in the next request.
Objective 1.5: Design a caching strategy
Caching is a mechanism for storing frequently used information within high-speed memory. This seemingly small change reduces access time and increases response time.
Implement page output caching (performance oriented)
The web browser can cache any HTTP GET request for a predefined period, which means the next time that user requests the same URL during that predefined period, the browser does not call the server but instead loads the page from the local browser cache.
ASP.NET MVC enables you to set the predefined period by using an action filter:
[OutputCache(Duration = 120, VaryByParam = "Name", Location = OutputCacheLocation.ServerAndClient)]
public ActionResult Index()
{
    return View("Index", myData);
}
This code sets the response headers so the browser knows to use its local cache for the next 120 seconds. The Duration setting represents the time, in seconds, that the page output should be cached. The Location qualifier gives direction to where caching takes place. Due to the Location setting in the attribute, any other browser call going to this URL will also get the same server-cached output. VaryByParam stores a different version of the output for each different parameter collection sent in for the action call. NoStore is used when caching should be switched off. The default Location value is Any, but Client, Downstream, Server, and ServerAndClient are other options available when setting the cache location.
The OutputCache attribute works for caching an entire page.
Donut caching
Donut caching is a server-side technology that caches an entire page except for the pieces of dynamic content (the donut holes). You can use the Substitution API through the HttpResponse.WriteSubstitution method by creating an MVC helper.
Donut hole caching
Donut hole caching is well supported in ASP.NET MVC by using child actions.
Example
[ChildActionOnly]
[OutputCache(Duration=60)]
public ActionResult ProductsChildAction()
{
// Fetch products from the database and
// pass it to the child view via its ViewBag
ViewBag.Products = Model.GetProducts();
return View();
}
You need to put a reference into the parent view using the Razor command @Html.Action("ProductsChildAction").
Setting OutputCache at the controller level automatically configures all actions that accept a GET request to use the same caching settings, as if the attribute were put on the individual actions.
Distribution caching
Distribution caching is the most complex of all caching techniques. A solution for this is Windows Server AppFabric, which includes AppFabric Caching Services to increase responsiveness for frequently used information, including session data.
The main component of AppFabric Caching Services is a cache client that communicates with a cluster of cache servers. Each cache server your application communicates with runs an instance of AppFabric Caching Services, and each maintains a portion of the cached data. AppFabric Caching Services also provides software that can enable each client to keep its own local cache.
When an application needs some information, it initially calls its own local store. If the information is not there, the client asks the cache cluster. If the cache cluster does not have the information, the application must go to the original data source and request the information. All the information in the various caches, local and cluster, is stored under a unique name.
The item being cached in AppFabric Caching Services can be any serialized .NET object. It is also controlled by the client application. The cached version of the object can be deleted or updated as the application requires.
One particular benefit of using AppFabric is that the service enables session maintenance.
Implement data caching
The default implementation of the .NET 4 Caching Framework uses the ObjectCache and MemoryCache classes in the System.Runtime.Caching assembly.
You can set an expiration period when you create a cache, and it is used by all users on the server.
Generally, you create a CacheProvider class that implements the ICacheProvider interface, used as an intermediate layer between the business layer and the data access layer.
Data caching is an important form of caching that can decrease the load on your database and increase application responsiveness. Static queries, in which the data is unlikely to change often, are excellent candidates for implementing data caching. Best practices in ASP.NET MVC 4 would put the calls to the caching service in the model because the model contains the primary business logic. Introducing a caching layer on top of the persistence layer, for example, can improve performance if your application requeries the same data.
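The provider shape described above can be sketched as follows. Note that ICacheProvider is an application-defined contract, not a framework type, and the SimpleCacheProvider shown here is a deliberately minimal in-memory stand-in; a production version would typically wrap MemoryCache.Default from System.Runtime.Caching instead of a dictionary.

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative application-defined contract between the business layer
// and the data access layer.
public interface ICacheProvider
{
    T GetOrAdd<T>(string key, TimeSpan lifetime, Func<T> valueFactory);
}

// Minimal in-memory sketch with absolute expiration.
public class SimpleCacheProvider : ICacheProvider
{
    private class Entry { public object Value; public DateTime Expires; }

    private readonly ConcurrentDictionary<string, Entry> _entries =
        new ConcurrentDictionary<string, Entry>();

    public T GetOrAdd<T>(string key, TimeSpan lifetime, Func<T> valueFactory)
    {
        Entry entry;
        if (_entries.TryGetValue(key, out entry) && entry.Expires > DateTime.UtcNow)
        {
            return (T)entry.Value;   // cache hit: skip the data access layer
        }
        T value = valueFactory();    // cache miss: query the datastore
        _entries[key] = new Entry { Value = value,
                                    Expires = DateTime.UtcNow + lifetime };
        return value;
    }
}
```

The business layer calls GetOrAdd with the query as the value factory, so repeated requests for the same static data never reach the database until the entry expires.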
Implement application caching
The HTML5 specification defines an Application Cache API (AppCache) to give developers access to the local browser cache. To enable the application cache in an application, you must create the application cache manifest, reference the manifest, and transfer the manifest to the client.
Create the application cache manifest
Example manifest (the file must begin with the CACHE MANIFEST line):
CACHE MANIFEST
# Cached entries.
CACHE:
/favicon.ico
default.aspx
site.css
images/logo.jpg
scripts/application.js
# Resources that are "always" fetched from the server.
NETWORK:
login.asmx
FALLBACK:
button.png offline-button.png
The CACHE section lists the resources that should be cached on the client, NETWORK lists the resources that are always fetched from the server and never cached, and FALLBACK lists the resources to serve when the corresponding resources are unavailable.
Reference the manifest
You reference the manifest by defining the manifest attribute on the <html> tag from within the Layout.cshtml or Master.Page file: <html manifest="site.manifest">
Transfer the manifest
The main point in transferring the manifest is to set the correct MIME type, which is text/cache-manifest. If you are doing this through code, use Response.ContentType = "text/cache-manifest". Without this MIME type, the browser won't recognize the file as a cache manifest or be able to use it.
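If you serve the manifest from an MVC action rather than as a static file, a minimal sketch looks like the following; the controller and file names are assumptions:

```csharp
using System.Web.Mvc;

public class CacheController : Controller
{
    // Serves the manifest with the required text/cache-manifest MIME type.
    public ActionResult Manifest()
    {
        return File(Server.MapPath("~/site.manifest"), "text/cache-manifest");
    }
}
```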
When the application cache is enabled for the application, the browser will fetch resource information in only three cases:
- When the user clears the cache
- When there is any change in the manifest file
- When the cache is updated programmatically via JavaScript
Implement HTTP caching
The HTTP protocol includes a set of elements that are designed to help caching, and it has multiple rules around calculating expiration.
With HTTP caching, everything happens automatically as part of the request stack and there is little programmatic impact.
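Even so, an MVC action can opt in to HTTP caching explicitly by emitting the standard cache headers itself; the controller name and the ten-minute duration below are arbitrary illustrations:

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Summary()
    {
        // Emit Cache-Control/Expires headers so browsers and proxies
        // can reuse the response for ten minutes without asking again.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(10));
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));
        return View();
    }
}
```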
Objective 1.6: Design and implement a WebSocket strategy
HTML5 WebSockets is a TCP-based protocol that enables two-way communication over a single connection: the server and client can send messages at the same time, as in chat or instant messaging clients. It also limits connection creation and disposal to once per session rather than once per message.
Reading and writing string and binary data
HTTP polling is an ongoing conversation between a client and server in which the client appears to have a constant connection with the server based on a series of standard AJAX requests; the browser creates a new request immediately after the previous response is received. This is a fault-tolerant solution, but it is very bandwidth- and server-intensive.
HTTP long polling is a server-side technique in which the client makes an AJAX request to retrieve data and the server keeps the request open until it has data to return. Instead of responding immediately, the server blocks the incoming request until data is available or the connection times out. It is not a totally reliable solution, and broken connections are common. Upon timeout or data return, the client immediately opens a new connection.
WebSockets acts as a replacement for HTTP in that it takes over the communications protocol between the client and the server for a particular connection. You should not use it as the primary means of communication between a client and server; instead, use WebSockets to support discrete functionality that needs two-way, long-running communication without the request-response overhead. You will find that WebSockets work best when supporting a part of the page that you designed as a partial page or when you are using some kind of donut or donut-hole caching.
System.Web.HttpBrowserCapabilities enables you to query a browser's version to determine whether it supports HTML5.
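A minimal sketch of such a check follows, using an assumed version heuristic rather than a definitive capability test; the controller name and the IE 10 threshold are illustrative:

```csharp
using System.Web;
using System.Web.Mvc;

public class ChatController : Controller
{
    public ActionResult Index()
    {
        // Request.Browser is a System.Web.HttpBrowserCapabilities instance.
        HttpBrowserCapabilities browser = Request.Browser;

        // Illustrative heuristic only: treat Internet Explorer below
        // version 10 as lacking WebSocket support and fall back to polling.
        bool useWebSockets =
            !(browser.Browser == "IE" && browser.MajorVersion < 10);

        ViewBag.UseWebSockets = useWebSockets;
        return View();
    }
}
```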
A WebSocket-based communication generally involves three steps:
- Establishing the connection between both sides with a handshake
- Requesting that WebSocket server start to listen for communication
- Transferring data
When a WebSocket is requested, the browser first opens an HTTP connection to the server and then sends an upgrade request to convert the connection to a WebSocket. If the upgrade is accepted and processed and the handshake completes, all further communication occurs over a single TCP socket.
LISTING 1-6 Example of a WebSocket handshake upgrade request and upgrade response
WebSocket handshake upgrade request
GET /mychat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: hy6TuUi8trDRGY5REWe4r5==
Sec-WebSocket-Protocol: chat
Sec-WebSocket-Version: 13
Origin: http://example.com
WebSocket handshake upgrade response
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: Ju6Tr4Ewed0p9Uyt6jNbgFD5t6=
Sec-WebSocket-Protocol: chat
LISTING 1-7 jQuery code for a client-side WebSocket connection
var socket;
$(document).ready(function () {
socket = new WebSocket("ws://localhost:1046/socket/handle");
socket.addEventListener("open", function (evnt) {
$("#display").append('connection');}, false);
socket.addEventListener("message", function (evnt) {
$("#display").append(evnt.data);}, false);
socket.addEventListener("error", function (evnt) {
$("#display").append('unexpected error.');}, false);
...
});
Or using straight method calls:
function connect(){
try{
var host = "ws://localhost:8000/socket/server/start";
var socket = new WebSocket(host);
message('<p class="event">Socket Status: '+socket.readyState+'</p>');
socket.onopen = function(){
message('<p class="event">Socket Status: '+socket.readyState+' (open)</p>');
}
socket.onmessage = function(msg){
message('<p class="message">Received: '+msg.data+'</p>');
}
socket.onclose = function(){
message('<p class="event">Socket Status: '+socket.readyState+' (closed)</p>');
}
} catch(exception){
message('<p>Error: '+exception+'</p>');
}
}
ASP.NET 4.5 enables developers to manage asynchronous reading and writing of data, both binary and string, through a managed API by using a WebSockets object.
You must implement the process of accepting the upgrade request on an HTTP GET and upgrading it to a WebSockets connection: HttpContext.Current.AcceptWebSocketRequest(Func<AspNetWebSocketContext, Task>)
You need to use a delegate when implementing this acceptance because ASP.NET backs up the request that is part of the current context before it calls the delegate. After a successful handshake between your ASP.NET MVC application and the client browser, the delegate you created will be called, and your ASP.NET MVC 4 application with WebSockets support will start.
LISTING 1-8 C# code for managing a WebSockets connection
public async Task MyWebSocket(AspNetWebSocketContext context)
{
while (true)
{
ArraySegment<byte> arraySegment = new ArraySegment<byte>(new byte[1024]);
// open the result. This is waiting asynchronously
WebSocketReceiveResult socketResult =
await context.WebSocket.ReceiveAsync(arraySegment,
CancellationToken.None);
// return the message to the client if the socket is still open
if (context.WebSocket.State == WebSocketState.Open)
{
string message = Encoding.UTF8.GetString(arraySegment.Array, 0,
socketResult.Count);
string userMessage = "Your message: " + message + " at " +
DateTime.Now.ToString();
arraySegment = new
ArraySegment<byte>(Encoding.UTF8.GetBytes(userMessage));
// Asynchronously send a message to the client
await context.WebSocket.SendAsync(arraySegment,
WebSocketMessageType.Text,
true, CancellationToken.None);
}
else { break; }
}
}
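Listing 1-8 shows only the delegate; it still has to be handed to AcceptWebSocketRequest from the action that receives the initial HTTP GET. A sketch of that wiring, with assumed controller and route names matching the ws://.../socket/handle URL in Listing 1-7:

```csharp
using System.Net;
using System.Web.Mvc;

public class SocketController : Controller
{
    // Answers the initial HTTP GET from the client script.
    public ActionResult Handle()
    {
        if (ControllerContext.HttpContext.IsWebSocketRequest)
        {
            // Upgrade the HTTP GET to a WebSocket; ASP.NET calls
            // MyWebSocket (Listing 1-8) after the handshake completes.
            ControllerContext.HttpContext.AcceptWebSocketRequest(MyWebSocket);
        }
        return new HttpStatusCodeResult(101); // Switching Protocols
    }

    // public async Task MyWebSocket(AspNetWebSocketContext context) { ... }
}
```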
Choosing a connection loss strategy
When using WebSockets, you need to determine how you are going to handle those times when you lose a connection.
When the connection is broken, the client might notice it when an onclose or an onerror event is thrown, or when the delegated methods are called, depending on how the connection was set up. However, the connection might also break without an onerror or onclose ever being raised, so you need to ensure that your application can manage a connection that is no longer available.
The entire premise of WebSockets is that a single socket connection stays open for long-running communication between the two ends. WebSockets can run into several types of connection issues, so you need to keep data protection and communication resets in mind.
A "fire and forget" methodology might not be sufficient for WebSockets. You should architect a system that sends a message, waits for a response, and determines from the response, or the lack of one within a set time, whether the send succeeded.
If a break occurs, you should reopen the connection and resend the data. Keep in mind that the connection might have broken after the data was received but before the sender was given the receipt, so your code needs to allow for receiving the same information multiple times. Make sure that the onclose and onerror events are managed and that you build in a recovery mechanism.
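One hedged way to allow for duplicate deliveries after a resend is to tag each message with an identifier and ignore repeats. The class and method names below are an illustration, not a prescribed API:

```csharp
using System;
using System.Collections.Generic;

// Illustrative duplicate-delivery guard. After a broken connection the
// client resends, so the same message id may arrive more than once.
public class MessageDeduplicator
{
    private readonly HashSet<Guid> processed = new HashSet<Guid>();

    public bool TryProcess(Guid messageId, Action apply)
    {
        if (!processed.Add(messageId))
        {
            return false; // Already handled; acknowledge, but do not reapply.
        }
        apply();
        return true;
    }
}
```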
Deciding when to use WebSockets
WebSockets are an ideal solution when you need two-way communication with the server with minimal overhead.
A common use of WebSockets is an in-browser instant messaging client. A traditional dashboard is also a candidate for the flexibility offered by WebSockets because near-real-time updates are a value-add.
The more traditional approach of a client timer might be a better solution in some situations.
Enabling the server-side controller to decide whether to support WebSockets is an optional strategy.
WebSocket messages do not have HTTP headers, yet they travel as if they were HTTP requests. This is a potential problem because many networks direct traffic by looking at HTTP headers, such as Content-Type, to determine how to handle messages. Antivirus and firewall software on the client machine can cause the same problem because such software analyzes incoming packets to determine their source and potential risk.
Objective 1.7: Design HTTP modules and handlers
HTTP modules and handlers enable an ASP.NET MVC 4 developer to interact directly with HTTP requests as they are both active participants in the request pipeline. When a request starts into the pipeline, it gets processed by multiple HTTP modules, such as the session and authentication modules, and then processed by a single HTTP handler before flowing back through the request stack to again be processed by the modules.
Implementing synchronous and asynchronous modules and handlers
Modules are called before and after the handler executes.
Creating an HTTP module requires you to implement System.Web.IHttpModule, which has two methods: void Init(HttpApplication) and void Dispose().
The <httpModules> configuration section in the Web.config file is responsible for configuring the HTTP module within an application.
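A minimal synchronous module sketch, with illustrative class and header names, shows the two IHttpModule members in use:

```csharp
using System.Web;

// Appends a custom header to every response passing through the pipeline.
public class CustomHeaderModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Runs on every request, just before response headers are sent.
        context.PreSendRequestHeaders += (sender, e) =>
        {
            HttpApplication app = (HttpApplication)sender;
            app.Response.Headers["X-Custom-Module"] = "processed";
        };
    }

    public void Dispose() { }
}
```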
Several tasks are performed by the HttpApplication class while the request is being processed.
The general application flow:
- Validation: The system examines the information sent by the browser to evaluate whether it contains markup that could be malicious.
- URL mapping: Performed if any URLs have been configured in the <urlMappings> section of the Web.config file.
- A set of events: The HttpApplication runs through security and caching processes until it gets to the assigned handler.
- The handler: Processes the request.
- A set of events: The application goes through the recaching and logging events and sends the response back to the client.
One of the key features of the Global.asax file is that it can handle application events. However, the Global.asax implementation is application-specific, whereas a module is much easier to reuse between applications. A module also provides additional separation of concerns (SoC) by enabling your ASP.NET MVC application to manage the request in the pipeline rather than manipulating it just prior to its being handled by MvcHandler. By adding modules to the global assembly cache and registering them in the Machine.config file, you can reuse them across applications running on the same machine.
An HTTP handler is used to process individual endpoint requests. Unlike modules, only one handler is used to process a request. A handler must implement the IHttpHandler interface, which has an IsReusable property and a ProcessRequest(HttpContext) method that gives the handler full access to the request's context.
The <httpHandlers> configuration section is responsible for configuring the handler by specifying the verb, path, and type that direct which requests should go to the handler.
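A minimal handler sketch, with illustrative names, shows both IHttpHandler members; the verb and path it answers to come entirely from the configuration entry:

```csharp
using System.Web;

public class HelloHandler : IHttpHandler
{
    // True lets ASP.NET reuse one instance across requests; safe here
    // because the handler keeps no per-request state.
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from the handler.");
    }
}
```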
Asynchronous module example
public class ScrapeModule : IHttpModule
{
public void Init(HttpApplication context)
{
EventHandlerTaskAsyncHelper helper =
new EventHandlerTaskAsyncHelper(ScrapePage);
context.AddOnPostAuthorizeRequestAsync(
helper.BeginEventHandler, helper.EndEventHandler);
}
private async Task ScrapePage(object caller, EventArgs e)
{
WebClient webClient = new WebClient();
var downloadResult = await
webClient.DownloadStringTaskAsync("http://www.msn.com");
}
public void Dispose() { }
}
Making an HttpModule asynchronous offers protection to your server and application because the primary thread passes control of the module to another thread.
Asynchronous handler example
public class NewAsyncHandler : HttpTaskAsyncHandler
{
public override async Task ProcessRequestAsync(HttpContext context)
{
WebClient webClient = new WebClient();
var downloadresult = await
webClient.DownloadStringTaskAsync("http://www.msn.com");
}
}
Choosing between modules and handlers in IIS
When a page is requested, the HttpHandler executes based on the file name extension and the HTTP verb. HTTP modules are event-based and inject preprocessing logic before a resource is requested.
If the key consideration is the URL, you should use an HTTP handler. If you want to work on every request regardless of URL, and you are prepared to work with an event-driven framework, you should create an HTTP module. If you need the information available to you prior to it calling your ASP.NET MVC code, it should be a module. If you want special files to be handled differently, it should be a handler.