
What Is Google AdSense and How Is It Impacting the Internet

 

What Is Google AdSense


If you look at the Internet of a few years back, you’ll see that advertising was done much as in other media like television, or, really, more like what you see in a newspaper.

You’d enter a site, and in some location you’d get to see a banner (often these were quite numerous and very large), which would present an ad for whatever company was paying for ads on that space.

But there was one problem with this kind of advertising. It really wasn’t exploiting the fact that the ads weren’t in some newspaper, but were instead presented over the Internet.

You’ve probably noticed a lot of things like this over the pages you’ve browsed. You’re browsing an online shop, looking for a watch, but you get a banner that advertises a car.

While you might, at some later point, want to buy a car, right now you’re looking for a watch, and it would surely have been nicer if the banner were advertising a watch, because then you would probably have clicked it.

Well, that’s also what the folks at Google thought, so they came up with a killer idea. It’s called Google AdSense, and it’s a targeted advertising program.

What you do (as a web designer or website owner), instead of jumping through hoops to get some banner on your site that your visitors won’t even care about, is simply allocate a region of the page.

You then sign up for the Google AdSense program, insert a small snippet of code in your webpage, and Google ensures that a banner appears in the location you specify, presenting ads relevant to the contents of your site.

It’s very easy for Google to do this because Google is a search engine company. It looks for the keywords in your page, searches a database of websites to find ones related to whatever is on your page, and presto: a targeted ad.
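To make the idea concrete, here is a deliberately simplified sketch (in no way Google's actual algorithm) of what keyword-based ad matching looks like: score each ad by how many of its keywords appear on the page, and show the best match.

// Toy keyword matching, purely illustrative.
const ads = [
  { text: 'Luxury watches, 20% off', keywords: ['watch', 'watches', 'luxury'] },
  { text: 'New family cars',         keywords: ['car', 'cars', 'family'] }
];

function pickAd(pageText) {
  const words = new Set(pageText.toLowerCase().split(/\W+/));
  let best = null;
  let bestScore = 0;
  for (const ad of ads) {
    // Score = number of ad keywords that appear on the page.
    const score = ad.keywords.filter((k) => words.has(k)).length;
    if (score > bestScore) {
      bestScore = score;
      best = ad;
    }
  }
  return best;
}

console.log(pickAd('Shop our collection of dive watches and straps').text);
// -> Luxury watches, 20% off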

You (the webmaster) get a fee for each visitor that clicks on an AdSense banner on your site. Now that’s bound to happen more often than with a traditional banner, because people are actually interested in what’s in that banner (otherwise, they wouldn’t be on your page, would they?).

But this also does wonders for the people who want to advertise, and for the same reason. The greatest thing about Google AdSense is that all the content in a banner is relevant.

This relevance is the key to the program’s success, and also the reason why everyone stays happy: the advertiser has a relevantly placed advert, the publisher earns money from their content, and Google takes its cut.

Of course, as always, Google has set some high standards for its AdSense program, in terms of looks and functionality. You can’t have more than two such banners on your website and Google only inserts text in these banners.

So an extra benefit is that AdSense advertising is a lot less obtrusive than regular advertising. But this also means you should position the banner carefully, because visitors might otherwise miss it altogether.

So in the end, Google AdSense is an advertising program that is unique because the ads are relevant to the content on the site. Anyone that wants to advertise pays Google for it. Anyone who wants to place ads on their site does this through AdSense, getting paid by Google in the process.

All transactions are run through Google, and the advertisers and publishers get access to statistics which help them to understand and monitor the effectiveness of their campaigns.

The whole process is elegant, simple, and effective for everyone in the chain, from site visitors to advertisers, and it’s one of the reasons Google is known for innovation and fresh thinking.

Definitive Technology Studio Advance 5.1 Channel Sound Bar with 9 Speakers

 

Definitive Technology Studio Advance 5.1 Channel Sound Bar with 9 Speakers | Includes an 8" Wireless Subwoofer | Built-in Chromecast, Bluetooth | HDMI ARC


Connectivity Technology: Bluetooth
Speaker Type: Sound Bar with Subwoofer
Brand: Definitive Technology
Model Name: Studio Advance
Subwoofer Diameter: 8 Inches

About this item

Setup in minutes: connect to Wi-Fi, download the Google Home app, update your device, and enjoy.

HIGH PERFORMANCE 5.1 CHANNEL SOUNDBAR SYSTEM: Features 9 drivers (with Aluminum Dome Tweeters) packed with dedicated left, right & center channels, & powerful DSP-enhanced sound decoding. Enjoy sonic precision in all that you see, hear & feel

NEW HDMI VIDEO SECTION, WITH NO DOWNSCALING: With HDR10 and HLG support, watch movies in 4K with exceptional clarity, color and contrast. The picture quality is stunning with breathtakingly beautiful video reproduction

THE INCLUDED 8" WIRELESS SUBWOOFER produces deep, rumbling bass filling your bedroom or living room with rich and immersive sound

HDMI ARC FOR SMART TV CONNECTIVITY: Connect the Studio Advance with your Smart TV using a single HDMI cable and get complete functionality & control with your television remote

ENDLESS MUSIC STREAMING WITH CHROMECAST: Create a whole home audio system with Chromecast-enabled speakers in your Google Home app & wirelessly stream your favorite tracks through Pandora, Spotify and more with voice commands to control your music.

GET IT ON AMAZON

Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

micahlerner.com

From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

Published July 24, 2021


This week’s paper review is the second in a series on “The Future of the Shell” (Part 1, a paper about possible ways to innovate in the shell, is here). As always, feel free to reach out on Twitter with feedback or suggestions about papers to read! These paper reviews can also be delivered weekly to your inbox.

From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

This week’s paper discusses gg, a system designed to parallelize commands initiated from a developer desktop using cloud functions - an alternative summary is that gg allows a developer to, for a limited time period, “rent a supercomputer in the cloud”.

While parallelizing computation using cloud functions is not a new idea on its own, gg focuses specifically on leveraging affordable cloud compute functions to speed up applications not natively designed for the cloud, like make-based build systems (common in open source projects), unit tests, and video processing pipelines.

What are the paper’s contributions?

The paper makes two primary contributions: the design and implementation of gg (a general system for parallelizing command line operations using a computation graph executed with cloud functions) and the application of gg to several domains (including unit testing, software compilation, and object recognition).

To accomplish the goals of gg, the authors needed to overcome three challenges: managing software dependencies for the applications running in the cloud, limiting round trips from the developer’s workstation to the cloud (which can be incurred if the developer’s workstation coordinates cloud executions), and making use of cloud functions themselves.

To understand the paper’s solutions to these problems, it is helpful to have context on several areas of related work:

  • Process migration and outsourcing: gg aims to outsource computation from the developer’s workstation to remote nodes. Existing systems like distcc and icecc use remote resources to speed up builds, but often require long-lived compute resources, potentially making them more expensive to use. In contrast, gg uses cloud computing functions that can be paid for at the second or millisecond level.
  • Container orchestration systems: gg runs computation in cloud functions (effectively containers in the cloud). Existing container systems, like Kubernetes or Docker Swarm, focus on the actual scheduling and execution of tasks, but don’t necessarily concern themselves with executing dynamic computation graphs - for example, if Task B’s inputs are the output of Task A, how can we make the execution of Task A fault tolerant and/or memoized?
  • Workflow systems: gg transforms an application into small steps that can be executed in parallel. Existing systems following a similar model (like Spark) need to be programmed for specific tasks, and are not designed for “everyday” applications that a user would spawn from the command line. While Spark can call system binaries, the binary is generally installed on all nodes, where each node is long-lived. In contrast, gg strives to provide the minimal dependencies and data required by a specific step - the goal of limiting dependencies also translates into lower overhead for computation, as less data needs to be transferred before a step can execute. Lastly, systems like Spark are accessed through language bindings, whereas gg aims to be language agnostic.
  • Burst-parallel cloud functions: gg aims to be a higher-level and more general system for running short-lived cloud functions than existing approaches - the paper cites PyWren and ExCamera as two systems that implement specific functions using cloud components (a MapReduce-like framework and video encoding, respectively). In contrast, gg aims to provide, “common services for dependency management, straggler mitigation, and scheduling.”
  • Build tools: gg aims to speed up multiple types of applications through parallelization in the cloud. One of those applications, compiling software, is addressed by systems like Bazel, Pants, and Buck. These newer tools are helpful for speeding up builds by parallelizing and incrementalizing operations, but developers will likely not be able to use advanced features of the aforementioned systems unless they rework their existing build.

Now that we understand more about the goals of gg, let’s jump into the system’s design and implementation.

Design and implementation of gg

gg comprises three main components:

  • The gg Intermediate Representation (gg IR) used to represent the units of computation involved in an application - gg IR looks like a graph, where dependencies between steps are the edges and the units of computation/data are the nodes.
  • Frontends, which take an application and generate the intermediate representation of the program.
  • Backends, which execute the gg IR, store results, and coalesce them when producing output.

The gg Intermediate Representation (gg IR) describes the steps involved in a given execution of an application. Each step is described as a thunk, and includes the command that the step invokes, environment variables, the arguments to that command, and all inputs. Thunks can also be used to represent primitive values that don’t need to be evaluated - for example, binary files like gcc need to be used in the execution of a thunk, but do not need to be executed. A thunk is identified using a content-addressing scheme that allows one thunk to depend on another (by specifying the objects array as described in the figure below).
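As a rough illustration, a thunk for a single compilation step might carry information along the following lines. The field names and hashes here are guesses based on the description above, not gg's actual format.

// Hypothetical sketch of a thunk describing one compilation step.
// Hashes are shortened placeholders; real thunks use full content hashes.
const compileThunk = {
  function: {
    exe: 'sha256:9a1f...-gcc',               // the compiler binary, itself a content-addressed object
    args: ['gcc', '-c', 'hello.c', '-o', 'hello.o'],
    envars: []
  },
  objects: [
    'sha256:9a1f...-gcc',                    // inputs the step depends on...
    'sha256:44c2...-hello.c'                 // ...including source files, referenced by hash
  ],
  outputs: ['hello.o']                       // what this thunk produces when forced
};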

Frontends produce the gg IR, either through a language-specific SDK (where a developer describes an application’s execution in code) or with a model substitution primitive. Model substitution uses gg infer to generate all of the thunks (a.k.a. steps) that would be involved in the execution of the original command. gg infer works from prior knowledge of how to model specific types of systems - as an example, imagine defining a way to process projects that use make. In this case, gg infer is capable of converting the aforementioned make command into a set of thunks that will compile independent C++ files in parallel, coalescing the results to produce the intended binary - see the figure below for a visual representation.

Backends execute the gg IR produced by the Frontends by “forcing” the execution of the thunk that corresponds to the output of the application’s execution. The computation graph is then traced backwards along the edges that lead to the final output. Backends can be implemented on different cloud providers, or even use the developer’s local machine. While the internals of the backends may differ, each backend must have three high-level components:

  • Storage engine: used to perform CRUD operations for content-addressable outputs (for example, storing the result of a thunk’s execution).
  • Execution engine: a function that actually performs the execution of a thunk, abstracting away actual execution. It must support, “a simple abstraction: a function that receives a thunk as the input and returns the hashes of its output objects (which can be either values or thunks)”. Examples of execution engines are “a local multicore machine, a cluster of remote VMs, AWS Lambda, Google Cloud Functions, and IBM Cloud Functions (OpenWhisk)”.
  • Coordinator: The coordinator is a process that orchestrates the execution of a gg IR by communicating with one or more execution engines and the storage engine. It provides higher level services like making scheduling decisions, memoizing thunk execution (not rerunning a thunk unnecessarily), rerunning thunks if they fail, and straggler mitigation.
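To make the coordinator's role more concrete, here is a minimal sketch (not gg's actual code) of how forcing a thunk can combine walking the graph backwards over dependencies with memoization keyed on the thunk's content address. The hashThunk helper and the engine object are hypothetical stand-ins for gg's content-addressing scheme and execution engines, and dependencies are nested here for simplicity (gg references them by hash).

const crypto = require('crypto');

// Cache of thunk hash -> output hashes, so identical work is never repeated.
const memo = new Map();

function hashThunk(thunk) {
  // Content address: hash of the thunk's description (command, args, inputs).
  return crypto.createHash('sha256').update(JSON.stringify(thunk)).digest('hex');
}

async function force(thunk, engine) {
  const key = hashThunk(thunk);
  if (memo.has(key)) {
    return memo.get(key);                        // already executed: reuse outputs
  }

  // Force dependencies first, walking the graph backwards from the output thunk.
  for (const dep of thunk.dependencies || []) {
    await force(dep, engine);
  }

  const outputHashes = await engine.run(thunk);  // e.g. one cloud function invocation
  memo.set(key, outputHashes);                   // remember outputs by content address
  return outputHashes;
}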

Applying and evaluating gg

The gg system was applied to, and evaluated against, four use cases: software compilation, unit testing, video encoding, and object recognition.

For software compilation, FFmpeg, GIMP, Inkscape, and Chromium were compiled either locally, using a distributed build tool (icecc), or with gg. For medium-to-large programs (Inkscape and Chromium), gg performed better than the alternatives with an AWS Lambda execution engine, likely because it is better able to handle high degrees of parallelism - a gg based compilation is able to perform all steps remotely, whereas the two other systems perform bottleneck steps at the root node. The paper also includes an interesting graphic outlining the behavior of gg workers during compilation, including a visual of straggler mitigation (see below).

For unit testing, the LibVPX test suite was built in parallel with gg on AWS Lambda, and compared with a build box - the time difference between the two strategies was small, but the authors argue that the gg based solution was able to provide results earlier because of its parallelism.

For video encoding, gg performed worse than an optimized implementation (based on ExCamera), although the gg based system introduces memoization and fault tolerance.

For object recognition, gg was compared to Scanner, and achieved significant speedups that the authors attribute to gg’s scheduling algorithm and to its avoidance of abstraction overhead present in Scanner’s design.

Conclusion

While gg seems like an exciting system for scaling command line applications, it may not be the best fit for every project (as indicated by the experimental results) - in particular, gg seems well positioned to speed up traditional make-based builds without requiring a large-scale migration. The paper authors also note limitations of the system, like gg’s incompatibility with GPU programs - my previous paper review on Ray seems relevant to adapting gg in the future.

A quote that I particularly enjoyed from the paper’s conclusion was this:

As a computing substrate, we suspect cloud functions are in a similar position to Graphics Processing Units in the 2000s. At the time, GPUs were designed solely for 3D graphics, but the community gradually recognized that they had become programmable enough to execute some parallel algorithms unrelated to graphics. Over time, this “general-purpose GPU” (GPGPU) movement created systems-support technologies and became a major use of GPUs, especially for physical simulations and deep neural networks. Cloud functions may tell a similar story. Although intended for asynchronous microservices, we believe that with sufficient effort by this community the same infrastructure is capable of broad and exciting new applications. Just as GPGPU computing did a decade ago, nontraditional “serverless” computing may have far-reaching effects.

Thanks for reading, and feel free to reach out with feedback on Twitter - until next time!


The future of e-commerce, Globally

In 2016, more than 20 years after Amazon’s founding and 10 years since Shopify launched, it would have been easy to assume e-commerce penetration (the percentage of total retail spend where the goods were bought and sold online) would be over 50%.

But what we found was shocking: The U.S. was only approximately 8% penetrated — only 8% for arguably the most advanced economy in the world!

We’ve had a close eye on the rate of e-commerce penetration globally ever since. Despite e-commerce growth skyrocketing over the past year, the reality is the U.S. has still only reached an e-commerce penetration rate of around 17%. During the last 18 months, we’ve closed the gap to South Korea and China’s e-commerce penetration of more than 25%, but there is still much progress to be made.


It’s clear that we are still in the early days of this megatrend, and it is our strong conviction that, over the next decade, we will inevitably reach a point where at least half of every retail dollar is spent online.

Below are five key predictions for what this road to further penetration will hold.

D2C retail will accelerate as merchants seek independence

Marketplaces have forged the path for e-commerce adoption among merchants of all sizes. They have raised significant capital and made the necessary investments in payments and logistics infrastructure, often subsidizing the consumer experience with free shipping or discounts to get them comfortable buying online.

The balance of power has shifted toward merchants, who previously didn’t have the picks and shovels to build their own e-commerce capabilities.

In recent years, merchants have pursued options aside from these marketplace aggregators. They have sought independence, opting to pay 5%-10% of their gross merchandise value (GMV) on their own technology infrastructure rather than paying the 6% to 45% (average of about 15%) in marketplace fees. Most importantly, they have prioritized owning the relationship with their end customers, given that customer loyalty and lifetime value are becoming ever more important in a hypercompetitive online market.


Why edtech needs to think bigger in order to stay relevant after the pandemic

At the end of 2020, I argued that edtech needs to think bigger in order to stay relevant after the pandemic. I urged founders to think less about how to bundle and unbundle lecture experience, and more about how to replace outdated systems and methods with new, tech-powered solutions. In other words, don’t simply put engaging content on a screen, but innovate on what that screen looks like, tracks and offers.

A few months into 2021, the exit environment in edtech...feels like it’s doing exactly that. The same startups that hit billion- and multibillion-dollar valuations during the pandemic are scooping up new talent to broaden their service offerings.

Ruben Harris, the founder of Career Karma, a platform that matches aspiring coding professionals to bootcamps, put together a massive report recently with his team to talk about the pandemic’s impact on the bootcamp market.

James Gallagher, the author of the report, tells me:

It is important to note that the full potential of bootcamps has not yet been realised. We are now seeing more exploration of niches like technology sales which provide gateways into new careers in tech for people who otherwise may not have been able to acquire training. To scale such models, new businesses will need venture capital.

He went on to explain how a notable acquisition from 2020 was K12 scooping up Galvanize, “which would give K12 exposure into corporate training and the coding bootcamp space, a market outside of K12's focus at the moment.”

To me, this report signals two things: the financial interest in bootcamps isn’t simply stemming from other bootcamps (although that is happening); it’s also coming from surprising partnerships. Leaving this subsector, we see creative acquisitions such as a Roblox for edtech buying a language learning tool, and a startup known for flashcards scooping up a tech tutoring service.

Find Out What The New Headless CMS Is Capable Of

The Best Open Source Headless CMS Software


Traditional content management systems (CMS) are built around serving content with a web-oriented framework combining both the frontend and backend. However, this monolithic approach is a poor fit for modern, multi-channel web environments. The headless CMS addresses this issue by providing a decoupled approach to content management.

A headless CMS provides a backend to manage your content and an API to serve it. This API allows developers to use any presentation layer to display content effectively across multiple channels. It also offers them unlimited options to structure and deliver content.
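As a quick, generic illustration of that decoupling, a frontend simply pulls content as JSON from the CMS API and renders it however it likes. The endpoint and response fields below are hypothetical and not tied to any particular CMS.

// Hypothetical endpoint and fields, purely to illustrate the decoupled pattern.
async function renderArticles() {
  const res = await fetch('https://cms.example.com/api/articles');
  const articles = await res.json();

  // The presentation layer is entirely up to you: a website,
  // a mobile app, a kiosk screen, and so on.
  for (const article of articles) {
    console.log(`${article.title}: ${article.summary}`);
  }
}

renderArticles();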

If you’re considering using a headless CMS to scale content management across channels, you can choose an open source headless CMS or a headless CMS from a software-as-a-service (SaaS) provider. In this article, we will present the pros and cons of each. We will then present the best open source headless CMS options.

Open Source vs. SaaS Headless CMS

Let’s look at the core factors to consider when comparing and contrasting open source and SaaS options for a headless CMS.

Ease of Implementation

The first consideration is the ease of getting the service up and running. With most open source options, you must set up the infrastructure first and then configure the functionality. Even though this provides a high level of control over your deployment, it’s time consuming and requires technical expertise.

On the other hand, SaaS solutions provide a more user-friendly configuration. You can launch your headless CMS with minimal effort. SaaS options also alleviate infrastructure management requirements, allowing developers to focus more on CMS customization.

Platform Maintenance

SaaS platforms do not require much maintenance. The only requirement is to maintain the content within the headless CMS. The platform provider is responsible for the underlying infrastructure and performance. Additionally, you will have a dedicated support option to contact for mitigating any issues within the platform.

In an open-source platform, you are responsible for infrastructure maintenance, and the only support option is asking the open-source community that contributes to the project. This approach is riskier, but provides greater control over platform performance and allows you to fine-tune the platform based on your unique needs.

Security

A SaaS provider manages the security and compliance of the headless CMS. In an open-source implementation, most projects are dependent on the contributors to adhere to compliance and security standards. Therefore, the security implications are solely on the shoulders of developers. Like platform maintenance, this approach is riskier and more time-consuming.

Customization

An open-source platform is only limited by the developer’s imagination and skillset to customize the application to suit any requirement. You have complete access to the source code, and you can even add features or extend existing features. In contrast, a SaaS platform is limited to the feature set and customization options offered by the provider.

Integrations

This is the point where both open source and SaaS platforms offer an equal amount of flexibility. Both options provide the ability to integrate with third-party platforms like payment gateways, ERP platforms, message brokers, social media, and more. The only difference is that, in an open-source platform, you have the option to create a new connector to facilitate a new integration with the help of the community.

The points above are summarized below.

  • Infrastructure configuration: open source, yes; SaaS, no
  • Infrastructure maintenance: open source, yes; SaaS, no
  • Setup difficulty: open source, medium/high; SaaS, low
  • Technical expertise: open source, high; SaaS, low/medium
  • Security and compliance: open source, unmanaged (user responsibility); SaaS, managed (provider responsibility)
  • Features: open source, community dependent; SaaS, platform dependent
  • Customization: open source, yes (high); SaaS, yes (granularity dependent on provider)
  • Integrations: open source, yes; SaaS, yes

Open-Source Headless CMS Options

Here we will explore some of the best available open-source headless CMS options. We will mainly focus on the features offered by each option and the differences between them. The following options are used for this comparison: Ghost, Strapi, Cockpit, Apostrophe, Directus, and GraphCMS.

Note: GraphCMS is not open source but we have included it because it’s a popular tool and there is a free community version.

The main features compared across these options include rich content editing, secure authentication, email and messaging, user analytics, multilingual support, a RESTful API, GraphQL, SQL and NoSQL support, webhooks, Docker/Kubernetes support, a CLI, and a premium offering. A few caveats apply to individual platforms: database support is sometimes limited (for example, to SQLite and MongoDB) or provided through the vendor's own database solution, and some capabilities are available only with limited support or through plugins and extensions.

Ghost

The Ghost platform (GitHub) under the Ghost Foundation is a headless CMS focused on providing a publishing platform for everyone from individuals to businesses. The platform is developed in JavaScript (Node.js) and is distributed under the MIT license. Ghost offers a suite of modern publishing tools, a fully-featured content editor, multi-author and multi-language content creation, and chronological content.

With its primary focus on publishing, Ghost provides built-in support for subscription and membership management. Additionally, Ghost supports global payments through Stripe and provides a user analytics function.

Ghost can also be integrated with existing tools like Zapier, Slack, and Mailchimp to extend the workflows and provide a unified experience. It is used by prominent services like Mozilla, DigitalOcean, Airtable, and Tinder to power their publishing platforms.

Moreover, Ghost provides a JavaScript library for the Content API, making it easier to work with content without crafting API calls manually.

Sample Query

JavaScript
const GhostContentAPI = require('@tryghost/content-api');

const api = new GhostContentAPI({
    // Ghost demo site
    host: 'https://demo.ghost.io',
    // Authentication via Content API key
    key: '22444f78447824223cefc48062',
    version: "v3"
});

// Fetch posts tagged "fiction" and log their titles
api.posts.browse({
    filter: 'tag:fiction+tag:fiction'
})
.then((posts) => {
    posts.forEach((post) => {
        console.log(post.title);
    });
})
.catch((err) => {
    console.error(err);
});



Strapi

Strapi (GitHub) is an open source headless CMS designed to work with all Jamstack sites. This developer-focused platform gives developers a flexible and extensible way to manage and distribute content using their favorite tools and platforms. Strapi supports both relational and non-relational databases and is frontend agnostic, allowing developers to use it with any frontend framework (React, Angular, Vue).

Strapi can be used to build anything from simple websites and mobile applications to fully-featured e-commerce platforms. It also supports both RESTful and GraphQL APIs for interacting with your content. Its powerful CLI module lets you create and manage projects within seconds.

The platform can be further extended using third-party integrations like Redis, Sentry, and Mailgun. Another major feature of Strapi is that it is secure by default, providing various security options like CORS, CSP, and XSS protection out of the box. The following code sample shows you how to query the Strapi API using the Axios library.

Sample Query

JavaScript

import axios from 'axios';

const token = '';

// Request API.
axios
  .get('http://localhost:1337/posts', {
    headers: {
      Authorization: `Bearer ${token}`,
    },
  })
  .then(response => {
    // Handle success.
    console.log('Data: ', response.data);
  })
  .catch(error => {
    // Handle error.
    console.log('An error occurred:', error.response);
  });



Cockpit

Cockpit (GitHub) is one of the simplest headless CMS platforms. With its API-first approach, Cockpit aims to provide a simple yet powerful backend to manage content that can be delivered through multiple channels.

Developed in 2013, the goal of the Cockpit project is to provide a comprehensive platform for managing structured and reusable content while keeping the simplicity needed to deliver data through the API in JSON format. The main features of Cockpit CMS are the ability to manage flexible content models, a simple, uncluttered backend user interface, and high scalability.

The Cockpit CMS can be easily integrated into any project without having to build scripts or include any advanced PHP libraries. It is also tested on both Apache and Nginx servers for smooth operation. 

Interactions with the API are authenticated using an API token. The following example retrieves posts from the Cockpit CMS using the Fetch API.

Sample Query

JavaScript
fetch('/api/collections/get/posts?token=', {
    method: 'post',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        filter: {published: true},
        fields: {fieldA: 1, fieldB: 1},
        limit: 10,
        skip: 5,
        sort: {_created: -1},
    })
})
.then(res => res.json())
.then(res => console.log(res));



ApostropheCMS

Unlike the other open source CMS software covered here, Apostrophe (GitHub) is a fully-featured open source CMS built with Node.js. Apostrophe offers a headless option through apostrophe-headless, enabling developers to integrate it with any Jamstack setup and serve content through a RESTful API. This combined approach allows developers to mix and match the functionality of a traditional CMS and a headless CMS to meet their exact requirements.

Apostrophe is geared towards a rapid, agile development cycle reducing the “software time to market.” Furthermore, Apostrophe breaks its functionality into different components, enabling developers to customize the CMS platform and build a solution that suits their needs.

You can extend Apostrophe using extensions and Integrations such as Salesforce Personas, Redis Caching, Stagecoach, Webhook notifications, and more.

As apostrophe-headless is an extension to Apostrophe, you activate it by installing the npm package, enabling the module, and exposing the required content through the REST API. Authentication can be done via API keys or bearer tokens.

Sample Configuration

JavaScript
modules: {
    'apostrophe-headless': {
        apiKeys: [ '' ],
    },
    'products': {
        // Other configurations
        // Expose via API
        restApi: true
    }
}



The code below enables you to query for data in Apostrophe using the REST API.

JavaScript
// Get all products
const { results } = await $axios.$get('/api/v1/products')



Directus

Directus (GitHub) is the most data-driven of these headless CMS platforms. This open-source headless CMS is built using Node.js and utilizes Vue.js to provide the administration interface. It’s distributed under the GPLv3 license.

Directus can connect to a preexisting database or create a new database to complement your project, and provide the necessary content using the RESTful or GraphQL API. This CMS platform has the widest support for relational database software, from free solutions like MariaDB and SQLite to commercial solutions like MS-SQL and Oracle DB. This enables developers to select the ideal database to match their requirements.

Directus provides a fully-featured administrative interface to manage the above-mentioned databases. Another major draw of this headless CMS is its sheer customizability, from full-text search and translations to workflows and event hooks.

Furthermore, Directus supports creating custom API endpoints and offers a command-line interface to create and manage projects. It also provides a JavaScript SDK which acts as a wrapper around the Axios library tailored to interact with the API. 

You can see how to use the Directus SDK to authenticate a user and return the required items in the following code block. With the help of the SDK, Directus simplifies the development lifecycle by offering prebuilt functionality for API interactions.

Sample Query

JavaScript

import DirectusSDK from '@directus/sdk-js';

const directus = new DirectusSDK('https://api.example.com/');

async function getData() {
    // Authentication using email and password.
    await directus.auth.login({ email: 'admin@example.com', password: 'password' });

    // Obtain all the articles.
    return await directus.items('articles').read();
}

getData();



GraphCMS

GraphCMS is a native GraphQL headless content management system. The objective of this headless CMS platform is to provide users with an exceptional digital experience while simplifying content management. GraphCMS is frontend agnostic and is developed by GraphCMS GmbH.

One of the best features of GraphCMS is its Digital Asset Handling which allows developers to transform their digital assets into different formats and structures. For instance, GraphCMS natively supports resizing image assets and converting files into different types.

Another powerful feature of GraphCMS is its approach to content personalization and flexible content modeling. It enables developers to create dynamic content for target audiences and tailor the content to match the user requirements by modeling the content. 

On top of that, GraphCMS uses Authentication Tokens to authorize the API requests. The following code block illustrates a simple query to obtain published posts about dogs, ordered by their created timestamp.

Sample Query

GraphQL

{
  posts(
    where: { title_contains: "Dog" }
    orderBy: createdAt_DESC
    stage: PUBLISHED
  ) {
    id
  }
}



SaaS Headless CMS Options

Going open-source is an option only if you have enough time to invest in configuring and setting up the headless CMS solution to meet your exact needs. If that is not the case, the best option is to choose a SaaS platform. It provides you with an out-of-the-box solution that can be adapted to fit your requirements with just a few modifications or configuration changes.

Some headless CMS options by SaaS providers include Fabric XM, Contentful, and Contentstack.

Topics:
 
HEADLESS CMS, CMS, HEADLESS E-COMMERCE, HEADLESS, OPEN SOURCE

Published at DZone with permission of Shanika Wickramasinghe. See the original article here.

Opinions expressed by DZone contributors are their own.
